Synopsys, Inc. (SNPS)

SNUG Silicon Valley 2025

Mar 19, 2025

Speaker 1

Good morning. I was wondering why you were so quiet for a while. They said they're reading the legal disclosure, so I hope you read it carefully. I'm so excited to be here and welcome you to our 35th SNUG. Thirty-five years of this conference, and the vitality, the energy, is amazing. Thanks to you. You know, I asked Aart how SNUG started. It was only three years after the company was founded. Given that synthesis was truly disruptive, transformative technology, we decided to create a venue for customer feedback. You guys love to give us feedback, and we welcome it. Thank you for keeping this event thriving and strong 35 years in. This year we have, as Anne mentioned, a special co-location with the Synopsys Executive Forum.

You're going to see, as you're mingling, not only the users, but the users plus our customers' executives, some media, some analysts, investors. Welcome, and I hope you'll enjoy the next couple of days. What an exciting, exciting time to be in our industry. It's amazing, the number of technologies and the speed, the pace, at which things are moving. We are in this era of pervasive intelligence, which truly promises to deliver incredible disruptions, innovations, and advancements to humankind. There will be an explosion of new products. Those products will be software-defined intelligent systems powered by AI silicon. What an incredible time to be an engineer. It's really such a special time to be an engineer, given all the opportunities for innovation and to deal with that pace and complexity of innovation. Now, I want to go through some examples of what is possible.

First, imagine the prediction of infectious diseases. Throughout history, humanity has faced devastating pandemics, most recently with COVID: the disruption to the supply chain, to the economy, not to mention millions of deaths. In the past 20 years alone, there have been more pandemics and epidemics than in the prior 100 years. It's essential that we use technology to predict, understand, and prevent pandemics. Today, innovative companies such as BlueDot are leveraging AI to analyze massive amounts of unstructured data in 65-plus languages, trying to predict diseases and their impact and spread before they accelerate. Another example is faster drug discovery. According to a recent NYU study, the risk of developing dementia at any time after age 55 among Americans is 42%. Today, roughly 11% of adults aged 65 and older have dementia. These alarming statistics underscore the urgent need to find cures.

Already, there are positive signs: by leveraging AI and quantum, drug discovery can be shortened from 10-plus years to two years, with higher success rates. I'm confident that with AI and technology advancements, we will soon be able to solve these very complex challenges. Now, let's look at a few examples that are closer to home and will have an impact on our daily lives. Consider Helix, an AI-powered robot. This is not just any robot. It is a generalist vision-language-action model robot that unifies perception, understanding, and learned control. Historically, a robot is a robot, meaning it performs hard-coded actions. This robot has the ability to learn, reason, and then take action.

This is a fusion of the latest in AI technology and robotics, or humanoid robotics, and the engineering of the electronics as well as the physical, mechanical aspects; bringing it together with a massive workload is an engineering feat. Now, a few months ago, SpaceX flew a 250-ton, 233-foot Super Heavy booster back to its launch pad. A custom-built tower caught this plummeting, massive object, traveling at half the speed of sound, back at the original launch tower. We are witnessing an explosion of new inventions that are AI-powered intelligent systems running on advanced silicon. These are very complex and difficult to design. The pressure engineers are feeling today is not only complexity; it is complexity and the pace at which they need to deliver these products, as well as, of course, cost and affordability. Despite the exponential design complexity, the pace of innovation has been accelerating.

To build these AI products, you need highly efficient silicon. The silicon and systems design worlds are absolutely converging, compounding complexity and also creating an opportunity for innovation. That's why we're talking about how we re-engineer the engineering of these products in order to deliver on these opportunities of the future. We are in the era of pervasive intelligence, where AI and smart technologies are omnipresent, interconnected, and seamlessly integrated into the fabric of our daily lives. The increase in software-defined intelligent systems and the proliferation of silicon is what we are striving to provide solutions for, so you can deliver on these promises and exciting products. This systemic complexity and the relentless race to market are impacting every industry. This is no longer only an EDA challenge, or a physics challenge, or a mechanical challenge. They're all compounding and intersecting.

More than ever, our engineers are facing a truly exponential challenge. How do we think differently about our workflows and our ability to deliver these future products? What I call the ingenuity of engineering is when you over-constrain a problem and are still expected to deliver a step-function improvement in the product. There is no better example than what Moore's Law has achieved over the last several decades, despite limitations that come from physics, from architecture, from manufacturing, et cetera. The ingenuity of engineering truly came together to continue that pace and rhythm of innovation. Now, we cannot do it alone. We're working with a number of ecosystem partners.

Now, in the age of AI, we are working with companies from NVIDIA to OpenAI and Microsoft in order to deliver that future I'm describing. As far as Microsoft goes, we have a long-standing relationship. I remember the early conversations with Microsoft leadership: how do we bring together our EDA products and optimize them on their silicon and on Azure, in order to reduce the total cost of ownership for them and for the customers, the semiconductor companies that will be using Azure for the hardware capacity to design these complex chips? Today, our EDA products are already ported to Azure on specified hardware. I'll talk later about what we are doing to optimize our technology on the various compute infrastructures and compute architectures.

The second level of partnership with Microsoft was to leverage the Copilot technology as well as the OpenAI LLMs in order to bring them into our own generative AI assistant for chip design. I'm honored to have Satya join me here for a brief conversation on how we see the world of AI, silicon to systems, and the solutions we have an opportunity to deliver. Satya, great to see you.

Speaker 2

It's fantastic being with you, Sassine. It's just wonderful to partner with you and have a chance to chat today.

Speaker 1

Thank you. Just to give you a sense, I don't have actual statistics, but I'll say roughly 80% of the people you're talking to in here are silicon folks, OK? The other 20% are a mix of software and systems. I remember the first time we spoke, you made it a point to remind me that your roots come from silicon, with your double-E background and, of course, your time at Sun, where you were more at the system level, and then, of course, software at Microsoft. What an interesting journey to come back right now at Microsoft, where you're very focused on the full stack from silicon to system. Describe to us why, and how you see these opportunities as we look ahead.

Speaker 2

You know, it's fantastic to have a chance to talk to a room full of people who are deep silicon and systems people. As a failed silicon engineer, you know what happens: you get to talk to people like you as opposed to doing silicon engineering. Really, I think it's just an unbelievable time, Sassine. To me, it reminds me a little bit of when I started in the tech industry in the late 1980s, early 1990s, because in some sense, it's a golden era, right? At that time, I remember Patterson's book had come out. Everybody was like, wow, there is a real movement here. It feels like that to me, where there is a new book that needs to be written on exactly what is happening.

If I look at even the fleet in Azure today, it looks unlike anything, unlike the one I got started with 15 years ago. Everything: the considerations for the data center design, the power draw, the network, the compute, the storage, and then, of course, the silicon systems themselves. I think that's what's happening. The question, I think, that you asked is, what's driving all this? It's kind of classic Moore's Law on hyperdrive. At some level, everybody was bemoaning the fact that Moore's Law is ending, except we have now found a new set of S curves in these scaling laws. That's, I think, the unique thing about our tech industry: it's not even about one S curve. It's about multiple S curves.

You have scaling laws which worked in pre-training. It's not that pre-training scaling laws are over. It's just that we found another scaling law for post-training, basically test-time compute and reasoning. Both of these are driving what I think are unbelievable capabilities, which, of course, you yourself are using to speed up silicon and system design. We are using them for knowledge work and productivity and software development. I think that's the exciting thing we're seeing.

Speaker 1

No, exactly. Now, when we talk about silicon to systems, of course, at the silicon level, as you mentioned, Moore's Law has been a driving force in continuing the opportunity of silicon to deliver better performance, better power, et cetera. Now, with the complexity of the workloads and AI, we have to think differently. We have to think from the workload down to the silicon. Of course, as you're designing the silicon, how do you customize it in order to optimize for the software? I know Microsoft is doing an incredible job along that stack. If you can take a minute, maybe describe what it is that you're doing and why.

Speaker 2

Yeah, I would say I'll start from the very top, Sassine, just to give you a flavor of my own belief on why this is different. We are building these new, I call them, systems of intelligence, right? Let's just take something like GitHub Copilot, right? First, I remember about three and a half years ago was when I first saw code completions, right? I mean, software engineers are as skeptical as silicon engineers. We said, will code, I mean, will this AI thing amount to anything? Will it really work? It started working. Code completions were magical, right? Because we had been working on IntelliSense for decades. Finally, we had IntelliSense in code completion. Then we said, OK, can I actually ask AI questions, right? Instead of going to Reddit and Stack Overflow and copying and pasting, can I actually ask?

Chat became the next thing. Then we said, OK, can we even do multi-file edits across the repo, right? Now we have agents. Now, instead of just thinking of a pair programmer, we have a peer programmer with SWE agents. That is a complete intelligent system, which is essentially what you're going to do for silicon design, right? In some sense, those are the new applications. Because when I think about silicon design, as customers of yours inside of Microsoft, we have to be able to do tapeouts, A0s, every year with absolute high fidelity. That is not going to happen if we do not have breakthroughs in the tooling that our engineers use. That then leads to the foundational rework of the data center and all the components in the data center.

That is where, for example, my SmartNIC, my DPU, my AI accelerator all have to be designed together to support the training and inference workloads going forward. I think that is the exciting part. There is a system architecture that is changing. The workload itself is changing. The coupling between those two is what we are all grappling with, quite frankly. It is great to see the innovation that you are bringing to us and we are bringing to you. It goes both ways; I think we need each other.

Speaker 1

Exactly. I mean, that's why we are so excited about what we're calling re-engineering engineering, because you have to think differently in order to design these complex, interconnected systems. Now, with Microsoft, we started the Copilot journey with great successes. As you know, Synopsys has thousands of software developers, and they started seeing the amazing benefit of having an assistant. Now we're moving to the more sophisticated LLMs with what we're calling agent engineers. I know you're very passionate about that, and I'm respecting your time. Any thoughts as you're thinking about the future of agents orchestrating multiple agents to solve these challenges?

Speaker 2

I think that that is the phase we are in, right? If you sort of say it started with more things like completions, we then went to chat. Now we are giving agents the task. In some sense, in the first phase, it was more we were asking questions and we were doing the execution. In this next phase, we're going to give instructions and AI will do the execution, if you will. We'll still be in the loop. That, I think, is what is important for us as engineers, whether it's on the silicon side or on the software side, because at the end of the day, the abstraction level goes up, but the understanding of the system still, I think, is going to be very, very important for us to be able to create great engineered outputs. That's, I think, the exciting part.

The other thing I would say is this reasoning capability. The big change in the last year has been that it's not just about having very capable pre-trained models. In fact, in an interesting way, there are lots of pre-trained models that are fairly capable. It's showing that if you have sufficiently large pre-trained models, the trick is really how you teach them reasoning for a given task, right? In your case, what does it mean to teach a model to reason over silicon design? Like what you and I talked about last time: the type of optimization you do between power, performance, and area. That's a reasoning task for which we have had previous algorithms. The question is, can you teach a core model that, using RL and other mechanisms?

That, to me, is the place where I think a lot of interesting product capabilities and model capabilities are getting intertwined. That is, I think, the exciting phase we are in.

Speaker 1

Exactly. Now, just for closing remarks from your side: you mentioned that software engineers can be as skeptical as hardware engineers, and I want to talk about that later. Any advice, given the pace at which innovation and technology are moving?

Speaker 2

It's a great question. I think what's happening for us, Sassine, is that even when I look at the core workflow inside of Microsoft, in spite of massive technical changes or platform shifts, right? We went from client-only to client-server to the web to cloud and mobile. The core workflow has remained stable, quite frankly. We changed a little here or there. We have fancy things like DevOps today and blah, blah, blah, but nothing at the core really changed. This is the first time I feel the core workflow itself may change. If you think about it, at LinkedIn, just to give you a feel for what we're doing structurally, we now have a new role called a full-stack builder.

Because if you think about it, we now have these powerful tools where a designer, a product manager, and a front-end engineer can all come at it as full-stack product builders. Why not increase the scope of these roles? I think one of the interesting things for us, and, quite frankly, one of the things OpenAI taught us, was that there is no distance anymore between what we would consider AI science and a workload or an application. That was the magic, right? To me, thinking about what is science to product to engineering, that is where, whether it's in your company, our company, or anyone in the audience, I think we'll have to fundamentally get down to: what is the end outcome?

How do we really achieve that outcome by streamlining our work, our work artifacts, and our workflow, to drive that outcome faster and deliver more value to our customers versus the status quo?

Speaker 1

You made my job easier for the rest of the keynote, Satya. Thank you so much for the partnership. Thank you for joining us this morning. Thank you.

Speaker 2

Thank you so much, Sassine. It's my pleasure.

Speaker 1

Thank you. As you've heard, the complexity of bringing together these multiple disciplines of optimization, to achieve the schedule and, more importantly, the differentiation, is accelerating at a speed I have not seen in my last 27, 28 years in the industry. As the old saying goes, necessity is the mother of all invention. Today, with the need to deliver on this pervasive intelligence with that increased complexity and pace, and you're going to hear me talk about complexity and pace throughout this discussion, we need to rethink how we re-engineer the products of the future. Now, I'm talking a lot about intelligent systems. For the remainder of my presentation, I have really three sections. What is an intelligent system, and how do we rethink how to design these intelligent systems?

Then we go into silicon and the key technologies in silicon to support these intelligent systems. Lastly, our vision and roadmap for AI to change the workflow and how things are done. What we saw earlier with SpaceX, with the robot, and when you think of an autonomous car or drones, et cetera, these are intelligent systems with a massive amount of software and an AI workload to drive the application. Those are very specific applications where you need silicon that is customized in order to drive efficiency across the stack. If you take a closer look at a drone, and here you'll see the complexity of what we're talking about, you start with the workload, which is software and AI that is expected to be autonomous. It must understand and avoid objects, both static and flying. It must communicate with the operator.

The entire system must be built to support this workload. Now, that is, of course, a lot of complex software and AI models. At the same time, the software must control the mechanical aspects of the drone, the actual motors, the battery. And that's electrical. You are going from the electronics, or the software, to the actual physical drone and the silicon that is optimized to make sure it's efficient in terms of latency, power efficiency, et cetera. That's the electronic system connected to an electrical system. Then you start thinking about the physics, the aerodynamics, the type of material you need to use in order to deal with the stress as well as the reliability of that drone as it's operating.

Now, if you're a system engineer thinking about designing this complex drone, you have to look not only at the individual engineering domains; you need to have an understanding of the cross-domain effects. In order to do that, you have to start thinking about how to virtualize, with a high level of fidelity, to design that system. The other thing that is important is that these systems are not operating in isolation. They're often interconnected, meaning one system or one drone is operating and interacting with another drone. I want to show you a really cool display where about 10,000 drones were flown to put on a show. They were all controlled from a single laptop. And there's even more complexity: these drones were not operating in a lab. They were operating in a real-world environment.

That brings an immense complexity, the same as when we talk about an autonomous car. How will it operate on the road in a real-world environment? With that multidisciplinary, interconnected system engineering, you have to get it done right the first time. Otherwise, the cost of developing these systems makes it very challenging for a company to survive without the right methodology and workflow. Another example of an intelligent system is actually a data center, where you have very specific workloads that need different optimizations from the silicon all the way up to the system in order to drive efficiency. I mentioned earlier the Azure example, where we are optimizing some of our technology that needs a massive amount of compute to run, and where we're seeing significant improvement in time and cost due to the optimization we're providing at the compute infrastructure level.

Just like in these examples, these domains are, again, multidisciplinary and bring together different engineering disciplines we need to take into account. This is where the digital twin comes in. I know we've been talking about digital twins as an industry for a while. Given the complexity, it is essential for simulating in real time and analyzing and optimizing at the system level. Now, to build an efficient data center, you need to model the workload on silicon devices that do not yet exist. Today, we actually have customers running their LLMs on our accelerated prototyping platforms. They are co-developing their LLM and the silicon for the target workload they are designing. Power is a critical component: how do you optimize power for that specific application and, in this case, a data center? Synopsys is actually the leader in electronics digital twins.

We've been talking about EDT, or the electronics digital twin, for at least three to four years, as we started engaging more deeply with the complexity of automotive and autonomous driving. The digital twin itself needs to model both the electronics and the surrounding environment. Now, in the case of automotive, we have to partner with the ecosystem, which brings in the other parts of the modeling that need to come together with the chip virtualization and the electronic system. Think about it: what other way is there to validate an autonomous car without this digital twin capability? If you take a look at the digital twin in action here, this is where Synopsys was able to virtualize and model the control system and the zonal and compute ECUs communicating with each other. That model is executed with our technologies called Silver and Virtualizer.

In this particular case, the example you're seeing was a partnership with IPG CarMaker in order to bring in the vehicle dynamics and the surrounding physical world. What we provided was the electronics virtualization; IPG CarMaker brought the surrounding physical world into this example. During the execution, the software development and testing teams can observe the behavior of that silicon in the environment for the specific workload they are building. That does not only apply to cars. Again, back to intelligent systems: drones, data centers, et cetera, all benefit from the virtualization I'm talking about. If we bring it closer to silicon, a 3D IC or advanced package is a sophisticated, complex system where you need to take into account more than the electronic design. You can argue that the electronic design in this case is understood.

The moment you start stacking these chiplets into an advanced package, you're dealing with a whole other slew of challenges, be it thermal, mechanical, or fluid-structure. How do you think about that system while you're designing it, rather than solving the problems when it's too late? About eight years ago, when we decided to collaborate with Ansys, we could envision the need: given where Moore's Law is, and the inability to go beyond the reticle size, you're going to be limited by those physical effects. Stacking chiplets and bringing multi-die into an advanced package becomes an essential part of the solution we needed to provide to customers. Today, we're proud that we are able to enable our customers, Ansys and Synopsys customers, to deliver these complex advanced-packaging and multi-die systems.

Now, I want to double-click into the silicon side: the key factors to continue the momentum of innovation. It goes back to the same thing. What our silicon folks are dealing with is complexity and the pace at which they need to design these hundreds of billions of devices. Actually, customers are already talking about trillions of transistors brought together in one package. On the schedule side, there's a race to go from an 18-month tapeout to 16 to 12 months or below to deliver this customized silicon for these intelligent systems. Now, how do you deal with that? For the technology complexity on a single die, we're talking about GAA; we're talking about angstrom-scale nodes in order to design that silicon. Then you bring it all together into an advanced package.

I want to walk through six key technology factors that I want us to think through in order to deliver this advanced silicon. First, I'll start with advanced packaging. 3D IC is the only way you can scale to the hundreds of billions and to the trillions, because there's no way you can put these things together in a monolithic fashion. Now, the moment you start scaling to that level of complexity, you can only achieve the performance or power by being efficient at the interconnect level and in how you architect that multi-die system. In most cases, dies may be coming from different process technologies and different foundries. How do you verify and validate and architect in order to deliver this advanced package?

Now, interfaces, interface IP, become essential. Even the only other alternative, a monolithic die sitting on a PCB, still needs IP to connect those multiple monolithic chips together. Which leads me to the second opportunity: the advancement in IP. The first factor was the 3D IC, to architect that system and bring the right choices and the optionality you need. The next challenge is the IP to interconnect this advanced system. We are fortunate to be in a leadership position in IP, meaning we work with every customer that is thinking about either monolithic or multi-die designs. One of the things we've actually observed over the last three to five years is the pace at which those standards are evolving.

Where we used to design a standard and it would remain valid and viable for our customers for four or five years, that time has now shrunk significantly. Actually, one of the best examples: in 2018, we were talking about 2 gigabit per second interconnect. That has been doubling about every 18 months, so that in 2024 we reached 32 gigabit per second, with an expectation of 64 this year. The pace at which the complexity of these interfaces is growing is truly exponential. The second layer with IP, when you think of advanced packaging, is HBM. HBM is another key driver in bringing together these advanced systems.
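As a quick sanity check on that cadence, here is a minimal sketch. Only the 2 Gb/s starting point, the roughly 18-month doubling, and the 32 Gb/s figure come from the talk; the function name and the fractional-year interpolation are my own illustration:

```python
# Compound the interconnect-bandwidth doubling cadence described above:
# starting at 2 Gb/s in 2018 and doubling roughly every 18 months,
# four doublings land at 32 Gb/s by 2024.

def bandwidth_gbps(start_gbps: float, start_year: float, year: float,
                   doubling_months: float = 18.0) -> float:
    """Bandwidth after doublings compounding every `doubling_months`."""
    doublings = (year - start_year) * 12.0 / doubling_months
    return start_gbps * 2.0 ** doublings

assert bandwidth_gbps(2, 2018, 2024) == 32.0    # 4 doublings in 6 years
assert bandwidth_gbps(2, 2018, 2025.5) == 64.0  # one more doubling out
```

On this cadence, the 64 Gb/s expectation "this year" (2025) arrives slightly ahead of the pure 18-month schedule, which is consistent with the "truly exponential" characterization.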

As you look at HBM, DDR, PCI Express, Ethernet, all these interfaces, the evolution has never been as fast as what we're seeing right now. It's not by accident that these hockey sticks are going exponential. AI is the driver: there's an application driving these interfaces at that pace and acceleration, as well as that complexity. Now, the next layer of complexity as we look at these advanced systems is the actual advanced nodes. I know people make comments that Moore's Law is dead, yet you see customers designing the most advanced AI silicon still pushing the foundries, not only from a capacity point of view but from a technology point of view, in order to keep up with the angstrom march.

The reason is that it does deliver the performance and power efficiency that is needed. It wasn't too long ago that we talked about 7 nanometer as an advanced technology. Now, most of these advanced AI chips are below 7 nanometer, in what we call the march to angstrom. We're fortunate, Synopsys is very fortunate actually, that we work at the very early steps of process technology development, with technologies we have like TCAD and OPC, where we are with the foundry at the very early R&D stage, at the device and process modeling and simulation level. The complexity and the art of what we're doing with the laws of physics is truly an engineering feat. It's unbelievable what's happening, and that march is continuing. Now, for the leading fabs to develop and productize their next node, it's not only an EDA investment.

It's EDA and IP, where you have to make sure that IP is designed not only on one foundry but on multiple foundries, to give our customers optionality when they're thinking about multiple dies sitting in an advanced package. Those dies may come from different foundries and need to be brought together. This is where we invest in IP and in the roadmap to keep up with not only the interface acceleration but also the design of that IP across multiple foundries. We talked about 3D IC, IP, advanced nodes, massive complexity. Obviously, the next step is: how do you verify that complex system? You have quadrillions of cycles to validate, and in our world, we cannot settle for "good enough" before tapeout, because the time and the cost are so significant. Verification needs to evolve.

Actually, Synopsys has been talking for at least 10 years about the verification continuum, where you start with continued acceleration with VCS and then evolve through different levels of abstraction, speed, and capacity, from VCS all the way up to HAPS for prototyping. You can go further up to Virtualizer to virtualize. That continuum is essential in order to drive that innovation. Actually, a couple of weeks ago, we announced our HAPS-200 and ZeBu-200 platforms, and many of you were at the launch. We had both Arm and NVIDIA talking about how it's helping them improve their verification efficiency and cycles for these complex systems they're building. Now, of course, AI has opened the door to a different way to deal with that verification complexity.

In the car example I showed earlier, we had virtualization of many parts of the silicon before the RTL had even been written. As the RTL matures and is ready, you go to VCS, to ZeBu, to HAPS, and you bring that continuum together in order to validate this complex SOC. Now, with advanced verification and IP, we're able to shift left the design cycle, which is, of course, essential to deal with that shrinking schedule. Next, we need to stretch the verification cycle not only from pre-silicon to post-silicon, but all the way into the field, ensuring that when the end product is sitting in a car or a drone, operating in real life with the real workload, it is reliable.

If there's a failure, what's causing the failure as it's operating in the system? We call this SLM, or Silicon Lifecycle Management. With SLM, the initial thinking was in-field health monitoring: how do you insert monitoring sensors into the chip so you can monitor the health of the end SOC as it sits in the field with a workload running on it? With 3D IC, a whole new opportunity and element emerged. In talking to some of the leading packaging and manufacturing companies out there, one of their big fears in going broad with 3D IC is that you're putting multiple dies and chiplets into an advanced package, and that package is running in a car or in a data center.

Let's assume one of those chiplets overheats when a specific workload runs, causing a failure, or warpage, or a crack in the die sitting above it. How do you monitor these things while the workload is running, if you don't have that capability? SLM is not only about taking these monitor sensors and watching how a monolithic SoC runs in the field; with 3D IC, it will become essential to have that capability early in the process. Now, the last of the six factors I talked about is EDA: how do we bring advanced EDA to deliver a convergent flow and deal with the angstrom march, as well as the rest of the elements I just described?

From systems architecture to digital and analog design flows, to signoff, to test, to manufacturing: how do you enable all these tools to come together in a hyper-convergent way, so you have a predictable, convergent flow and outcome, reducing the number of iterations and the issues discovered late in the flow? Of course, we build AI in everywhere, at every opportunity we have, to accelerate the task. We pioneered bringing reinforcement learning to this space starting in 2017, in order to tame that complexity in every part of the flow where it's needed. Those are the six technology areas needed to deliver state-of-the-art silicon. As I said, we cannot do it alone.

It takes many deep collaborations with foundries, with OSATs, with IP partners to deliver what is arguably the most complex engineering task known to humankind. Now, I want to switch to our AI journey. I want to start with Satya's point that engineers can sometimes be skeptical. I urge you to put that skepticism aside, because you're not doing yourself, and definitely not your company or your team, a favor if you don't rapidly adopt the technology needed to change the workflow, given the complexity we're talking about. In many discussions with customers about what we've delivered so far with Synopsys.ai, and I'll walk through what we have today and what customers are using, they see tremendous value. At the same time, they say it has not changed their workflow; it helped them deliver on the complexity.

We call that taming the complexity, and we appreciate it. But with the pressure to do something different to deal with that exponential complexity, we have to think differently. This is where we believe AI is going to change the workflow. Let me walk you through the journey of where we are and where we see the world going with AI. First, Synopsys.ai: this is where we pioneered with reinforcement learning in 2017, bringing it into the physical implementation space with DSO.ai, which works collaboratively with Fusion Compiler to optimize across many inputs and a large optimization space and deliver the best PPA in the shortest time possible. We also started talking about our data continuum, with data analytics across Design.da, Fab.da, and Silicon.da.

How do you stitch together insights about what happens at the next step of the flow? The results are amazing. I remember, around the 2018 timeframe, the R&D team came in very excited about a prototype they had run on a number of customer designs with fantastic results, and we went to customers trying to convince them to use the technology. Partly there was skepticism, but the other part was confusion: how do I use this technology in my workflow? My engineers are structured a certain way that we've optimized over two or three decades; how do I evolve? Now, I hope none of you doubts that you need to use the Synopsys.ai technology to get the outcomes and the productivity you need to deal with that complexity. Next, I want to move to generative AI.
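To make the design-space optimization idea behind tools like DSO.ai concrete, here is a toy search over implementation knobs scored by a PPA-like reward. The knob names and the reward model are invented for illustration, and a real reinforcement-learning agent learns where to sample rather than enumerating; this is not the product's API.

```python
from itertools import product

# Toy design-space exploration: try knob settings, score a PPA-like
# reward, keep the best. In a real flow, each evaluation would be a
# full implementation run, which is why learning where to sample matters.
KNOBS = {
    "target_util": [0.6, 0.7, 0.8],
    "clock_uncertainty_ps": [20, 40, 60],
}

def ppa_reward(cfg):
    # Stand-in for running the implementation flow and scoring PPA:
    # best at util 0.7 and 40 ps uncertainty, worse as you move away.
    return -abs(cfg["target_util"] - 0.7) - abs(cfg["clock_uncertainty_ps"] - 40) / 100

best_cfg = max(
    (dict(zip(KNOBS, vals)) for vals in product(*KNOBS.values())),
    key=ppa_reward,
)
print(best_cfg)  # the setting with the best PPA-like score
```

The real optimization space has vastly more knobs and continuous ranges, which is why an exhaustive sweep is infeasible and a learning agent pays off.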

In generative AI, think of it the way we describe it today: you have a copilot, and it can be assistive or creative. Assistive is the copilot technology we started with Microsoft, where you have a workflow assistant, a knowledge assistant, and a debug assistant, so you can ramp up a junior engineer much faster, and help an expert engineer as well. They can interface with our products in a more modern, effective, efficient way. Then you have the creative element, where we have early customer engagements on a number of examples, from RTL generation to test bench generation to test assertions: a copilot that helps you create part of your RTL, test benches, documentation, and test assertions.

I'll show in a moment the maturity journey and where we feel the technology will be six months, nine months, two years from now, and how it will evolve. The results our customers are seeing from both the assistive and the creative sides are actually fairly impressive. That's not surprising, because when you modernize the way the work is done, you get truly impressive results compared to a human engineer working with the same approach or method as before. In the creative solutions, the productivity boost can be fairly significant: you can go from days to minutes. I want to remind us that in our case, we cannot have models that hallucinate. You cannot have models that say, "That was good enough, let's go."

We are very deliberate about when and how we engage with our customers, to make sure the maturity of what we're offering is acceptable without putting any part of your workflow at risk. That is an important point. Now, as AI continues to evolve, so will the workflow. I often get asked, primarily by our investor stakeholders, when we will see a change in EDA as a market from leveraging AI. I do not believe that will happen unless your workflow changes, meaning you can do certain things very differently so that you, the customer, can deliver on your product roadmap in a faster, more effective, more efficient way.

Now, with the agentic AI era, I would like to introduce the concept of agent engineers: agent engineers will collaborate with human engineers to tame that complexity and change the workflow. This is where we have deep collaborations with Microsoft, NVIDIA, and others on how to build these agents specifically for the semiconductor market, and, within it, how to have specialized agents for each part of your workflow. As you look at this chart, think of it as our roadmap and vision for how we go from Synopsys.ai and the data analytics to agent engineers and the agents of the future. On the x-axis is the evolution from copilot to autopilot. On the y-axis, from the bottom up, is how you build that capability.

It's a cumulative capability: you have to build it and layer it, step on step, from generative to agentic. First, you start with assisting. This is where we put big energy and effort over the last couple of years to bring copilot capability into each one of our products, with LLMs trained and specialized for each product. The next step is acting, where you have agents specialized for a specific part of the flow. As I mentioned, for RTL generation you're going to have an RTL generation agent, a test bench generation agent, a test assertion agent, and so on. These action agents will, of course, improve over time, because they learn from the design and the environment you're running, which will be different for each customer.

The next level is bringing the multiple agents together and orchestrating those tasks. Then you go into dynamic, adaptive learning, where you optimize based on your own workflow. In the first few steps, you're building agents to operate within an existing workflow. The workflow starts changing as you move into the orchestrating and planning steps, with the ambition of reaching the point where the model, or the agent framework, can take autonomous action and decisions on part of a chip, or the entire chip, as the technology matures and evolves. As you look at this, I would like to draw a parallel to autonomous driving. I'm sure all of you are very familiar with the L1 through L5 levels for autonomous cars, where around L1 through L2 and L3 you move from a human monitoring the road to the system monitoring the road.

Let me walk you through the similar levels: what is available today, and how we envision L1 to L5 for agent engineers. Think of L1 as the copilot of today, which has the ability to assist engineers in creating files using LLMs. The moment you move to L2, agents start acting on specific areas of the workflow. For example, you can ask an agent to fix a lint error or fix a DRC violation. They are empowered to act on a very specific part of the workflow, with a human engineer collaborating with these agent engineers. As you move into L3, multi-agent orchestration becomes very important: how do you orchestrate different agent types? You start creating the opportunity to solve problems across domains.

For example, to fix signal integrity violations, or to "close timing for me," you need multi-agent orchestration; it takes different agents working together to achieve that ask. At L4, you start doing planning and adaptive learning, which allows the agentic solution to assess the quality of results and refine the flow. This is where the workflow starts adjusting and changing, as the system improves its own workflow, no longer the same workflow we started with at L1 or L2. L5 is where we feel the term autopilot is appropriate, adding a high level of decision-making capability: the entire multi-agent system has the capability to fully and autonomously reason, plan, and take actions to achieve that higher-level outcome.
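The L1 through L5 progression just described can be summarized in a small sketch. The level names and comments paraphrase the talk; the code structure itself is illustrative, not a Synopsys API.

```python
from enum import IntEnum

# Agent-engineer autonomy levels, as described in the talk, by analogy
# to the L1-L5 levels used for autonomous driving.
class AgentLevel(IntEnum):
    L1_ASSIST = 1       # copilot: LLM helps create files, answer questions
    L2_ACT = 2          # acts on a specific task, e.g. fix a lint/DRC error
    L3_ORCHESTRATE = 3  # multi-agent orchestration across domains
    L4_ADAPT = 4        # planning + adaptive learning; refines its own flow
    L5_AUTOPILOT = 5    # autonomous reasoning and action at chip scope

def human_in_loop(level: AgentLevel) -> bool:
    """Below L5, a human engineer still collaborates with and supervises
    the agent engineers; only L5 is called 'autopilot' in the talk."""
    return level < AgentLevel.L5_AUTOPILOT

print([(lvl.name, human_in_loop(lvl)) for lvl in AgentLevel])
```

Modeling the levels as an ordered enum also captures the cumulative point made above: each level builds on the capabilities of the one below it.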

Now, some of you may be wondering, and I know we'll have a number of sessions on this today and tomorrow: where are we? At the L1 and L2 levels, we have engagements with a number of customers, and that technology will, of course, continue maturing and building. Back to the ADAS parallel: it's not that when you reach L2, or you're designing at L3, you stop touching L2. Those levels keep evolving as you get to the next phase and the next level of maturity. Now, back to the skeptics. I think we have at least one cheerleader: Jensen would like to rent millions of Synopsys agents to tame the complexity of chip design of the future.

It's the same discussion Satya raised: not only for chip design, but for every industry, the workflow will change. There will be collaboration between human engineers and agent engineers across industries and product applications. We are, of course, very excited about this opportunity and look forward to engaging with you as we evolve and build the roadmap from copilot to autopilot. Now, as I wrap up: beyond the optimization I described at the system level, silicon level, and AI level, there are other opportunities we need to consider. When you look at those technology horizons, there is the workflow, the top layer we just talked about, then the engines or solvers, and then the compute. At the workflow layer, that is what we just described: L1 through L5 with agent engineers.

Within agent engineers, we continue evolving from a sub-block to a bigger part of the SoC to the entire chip. Below the workflow sit the actual engines. Imagine you have a timing agent: it will need to work not only at the PrimeTime shell level. Is there an opportunity to optimize at the engine and solver level? The answer is yes. How do we continue evolving the engines and solvers, not only at the electronics level but, as we expand the portfolio, across multiple levels, from electronic to electrical to mechanical and so on? This is where the digital twin becomes even more practical, scalable, and accurate to use.

At the compute level, I believe we've done a very good job as an industry of taking every opportunity to optimize, from CPUs to GPUs, across multiple flavors of CPU and multiple flavors of GPU. It's not a simple port from this CPU to that CPU; it's an optimization opportunity where we see 20%, 30%, 40% improvement in compute from one CPU to the next. As we optimize on GPUs, you've seen, most likely in the last few days and definitely in the last few years, talk of 10x, 15x, 20x improvements. Looking ahead, possibly with qubits and the QPU, what are the opportunities to keep optimizing at that bottom layer, the compute, then the engine, then the workflow?

Now, as we wrap up: I talked about a couple of concepts, the need to re-engineer engineering, and agent engineers collaborating with human engineers to change the workflow, in order to deal with the complexity and pace of what we're building. I'm very thankful for, and appreciative of, the Synopsys team, a team so committed to our customers and to sustained innovation that they come in every morning with the enthusiasm to drive it. What makes us even more excited and happy is seeing our customers use our innovation and technology to deliver your products and truly change the way humankind will live. We're at the center of it. Our mission is empowering innovators to drive human advancement. With that, a big, big thank you. Enjoy the show, and I look forward to interacting with you. Thank you.
