Good afternoon, everyone. My name's Blair Abernethy. I'm one of the software analysts here at Rosenblatt Securities, and I'm thrilled to have Synopsys joining us today. I first need to read some quick safe harbor language, and then we'll get into our Q&A session here. Today's discussion may contain forward-looking statements related to Synopsys' current outlook, expectations, and beliefs, which are subject to risks and uncertainties that could cause actual results to differ. Please refer to Synopsys' most recent SEC filings for a discussion of risk factors that may materially affect these statements. So, with that, joining us today from Synopsys is Stelios Diamantidis, who is the Executive Director of the Center for Generative AI at Synopsys.
Stelios has been with the company for about seven years and is a fellow electrical engineer, as I am, and I'm looking forward to our discussion around AI. Also, Director of Investor Relations, Phil Lee, is with us to provide additional support. So gentlemen, thank you and welcome.
Thanks for having us, Blair. Appreciate it.
Just another footnote: if anyone in the audience has a question, they can type it into the question block at the bottom of their page, and it will show up here on my screen, and I'll be happy to enter it into the conversation. So, just to kick it off here, Stelios, I'm sure you're really busy with a title like Executive Director of the Center for Generative AI at a company the size of Synopsys. Tell us a little bit about what you're doing these days at Synopsys.
Oh, I'd be happy to. So, as many of you may know, I'll just very quickly go over what Synopsys does first, 'cause I think it'll give us the right context. We deliver silicon systems design solutions, and those solutions maximize our customers' R&D capability and productivity. You know, our motto here is, "It's our technology, but it's your innovation," and so we wanna make sure that innovation happens as quickly as possible. As part of doing this, we have pioneered a lot of solutions, a lot of technologies, over the almost 40 years that Synopsys has been in existence. And we've been part of many, many design journeys, as we like to call them, that our customers have taken through those innovations. So the Center for Generative AI is actually something very natural for us.
It is us recognizing that there's a new technology that has huge potential: potential for us to deliver new solutions, improve access to our existing solutions, and even benefit internally in the way that we design software, or even the way that we drive our operations. That is really the scope of this center, which we've been running for the past 18 months or so.
Great. Great. If we look back, maybe starting a little bit before the onslaught of generative AI into the mainstream, talk a little bit about what Synopsys has been doing over the last four or five years. I think your first really AI-enhanced product came out about two years ago, but maybe give us a sense of what Synopsys has been delivering to customers utilizing AI.
Oh, absolutely. So when you talk about AI, it's always good to define what it is, right? For us, to call something AI is more of a qualitative definition. It means that you've achieved some software capability that has reached a sort of human level of complexity when it comes to decision-making or effectiveness. So it's really a soft definition, but it's a hard bar to clear. It's very hard to get to that level of accomplishment with a piece of software. Now, in the broader context of machine learning, clearly Synopsys has been working with machine learning-based solutions for many, many years, mostly models embedded in our tools that can help us...
sort of help tools become more acquainted with the design environment, learn from interactions, and become faster and more convergent, really as an under-the-hood capability. But that's not what we call AI. I would say that for us, our AI journey officially started in 2017, when we first started having line of sight into new technologies that had the potential to allow us to solve problems in chip design that we simply hadn't been able to solve before. And what started as a very ambitious research program even surprised us by becoming an actual product in 2020, so two and a half years or so after we started dreaming.
And that product, of course, is Design Space Optimization AI, or DSO.ai. It's a product that helps solve extremely difficult problems in the area of digital design, primarily physical design: floor planning, placement, clocking, and routing of chips, one of the biggest problem spaces in chip design. And so in 2020, we went to market with our first solution, and it was really quite amazing to see this journey of AI-enabled solutions take shape. Not only did we grow DSO.ai in the digital space, to the tune of, I believe, now north of 400 commercial tapeouts that have been achieved.
But we've also been able to branch the technology out and apply it to many other complex problems in chip design: for example, the implementation and selection of tests for verification; the semiconductor test process, which is a very expensive step in the overall chip design and manufacturing flow; and even analog-level problems that had been deemed intractable before. And we're still seeing great applicability of the space optimization technology, which is a reinforcement learning-based technology, from way before ChatGPT came into play.
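To make the design-space-optimization idea concrete, here is a toy sketch in Python. It is not Synopsys' algorithm: the tool parameters (`utilization`, `clock_ns`), the synthetic score function, and the random-search loop are all invented for illustration. A reinforcement-learning system like the one described would replace the blind sampler with a learned policy that proposes promising settings and needs far fewer tool runs.

```python
import random

def run_flow(params):
    """Stand-in for a full synthesis/place-and-route run. Returns a
    synthetic 'PPA score' (higher is better). In reality each evaluation
    is hours of EDA tool runtime, not a closed-form function."""
    util, clock_ns = params["utilization"], params["clock_ns"]
    # Invented trade-off: the sweet spot sits at utilization 0.7, 1.2 ns.
    return -((util - 0.7) ** 2) - ((clock_ns - 1.2) ** 2)

def explore(n_trials=200, seed=0):
    """Random-search baseline for design-space exploration: sample tool
    settings, evaluate each one, keep the best configuration seen."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = {"utilization": rng.uniform(0.4, 0.9),
                     "clock_ns": rng.uniform(0.8, 2.0)}
        score = run_flow(candidate)
        if score > best_score:
            best_params, best_score = candidate, score
    return best_params, best_score

best, score = explore()
```

The point of the sketch is the shape of the problem: every evaluation is a full, expensive tool run, so learning where to sample next, rather than sampling blindly, is worth a great deal.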
Some of the early results from DSO in terms of, you know, PPA improvements are quite impressive, like material improvements on customers' designs. What are you seeing in some of the other, newer areas? Do you have enough feedback yet from, you know, the verification, test, or analog AI-enhanced products?
Yeah. And I think that's probably one of the most exciting aspects of this technology: the problem space changes dramatically, and even value is captured differently. So as you said, in digital design, value is PPA. Well, in verification, value is time to regress or time to coverage completion, right? And then in test, it is really the time that you spend on the tester, and in analog, it's the time that it takes you to, for example, migrate an analog circuit from one process to another. So wildly different definitions of the problem, and yet the results, both in terms of the applicability of the underlying technology and the magnitude of the value, are very similar.
So, for example, in verification, we have seen 5-10x faster closure in many, many design cases, which changes the way that you think about verification overall, right? Generally, verification is a problem where an engineer, usually with a lot of experience, tries to figure out exactly which parts of a chip's overall functionality to test in the time they have available. So now they have a lot more time available, they can do a lot of different things, and they can actually fundamentally change the way that they design. Same thing for test, for example.
If you have the ability to reduce the pattern counts, which directly correlate to the amount of time you're gonna be spending on a tester, that changes things. We have seen easily 20%-30% pattern count reductions with our test space optimization technology, and that has led to amazing changes in the way that people plan products, and so on and so forth. I think the magnitude of the impact has been very substantial, and at the same level of impressive results as we originally saw with our digital design solution.
These products are all add-on products, if you will, to an existing tool chain that a customer might be using, right? So Synopsys is actually getting additional revenue for these products. You're not charging on a seat basis, though, right?
The products are licensed independently, but they're used in conjunction with the underlying tools, and that is actually a very important aspect. Look at problems outside of EDA, where, for example, you had reinforcement learning-based systems learn how to play chess or Go, or even within the chip design space, where you've seen different kinds of solutions try to create circuits from first principles. Very exciting technologies, and we really love to see that kind of innovation, but they all run into some very specific problems when it comes to scaling. Because getting one great result is actually impressive, but being able to consistently get great results across all chips and all verticals and all target processes is an extremely difficult thing to do.
So our AI doesn't learn how to build circuits, if you like, with transistors and a whole bunch of wires. It learns how to build circuits using Synopsys tools. So it leverages our 40 years of experience solving known problems; we don't reinvent the wheel here. And we focus all our energy and our investment on learning how to navigate these tools through the more intractable problem spaces, where no known mathematical solutions exist. So they're companions to our product suites, and they grow together. In fact, it's practically impossible to talk about one without the other, and hence we see them as addressing one space: the digital design space, the verification space, the analog space.
Okay. Okay, that makes sense. Makes sense. And how has the uptake of these new products been with your customers? Do they take a long time to pilot with them, or are they putting them into production fairly quickly? What's been your experience over the last couple of years?
You know, ever since 2020, when we first went to market with a brand-new solution, we've seen tremendously quick uptake of some of these products. Now, tremendously fast in our world doesn't mean hours or days; it still means weeks or months. Because at the end of the day, you are putting a new capability into production, and you are putting a lot of money behind the chip program for which you're deploying this capability. So it does take time. But in my experience, in my years in EDA, I haven't seen a technology like DSO go from zero to 400 tapeouts in, what, three years or so. That means that people put it into test environments, then into pilot environments, then do test chips, then finally deploy to production.
So to reach 400+ production tapeouts means that you've actually applied this to a lot of circuits. And I'll share one more thought from our early days, when we started talking to customers about DSO.ai. As you can imagine, when you walk into the engineers' den, as I'll call it, at a leading semiconductor design company, and you're introducing a new capability and you use the term AI, immediately people, of course, want to see it working. And they don't wanna see it working on some easy examples that some of the more junior designers are taking on. They want to see it working on some of the hard-to-close designs, some of the fastest, most energy-efficient designs they have available.
And so that, for us, was really the measuring stick, if you like, for the solution. And being able to walk in and demonstrate results at that level of capability was quite incredible. So yes, fast, but certainly in our space, even fast takes a bit of time.
Yeah. Yeah, interesting. Let's maybe shift the conversation a little more onto the last two years or so with generative AI, and maybe explain a little bit about what you guys are looking at. I'll let you define it the way you want, but you know, it's certainly... I look at it and I go, "These are opportunities to improve productivity at the customer site," change the tools, but also, there are internal things you can do at Synopsys that can drive productivity and margin improvement using this technology. So maybe lay out for me how you guys are approaching it.
Oh, yeah, absolutely. And you're absolutely right, Blair, that's the way to think about it. There are opportunities both internally and externally, and maybe for the first time, we can serve them both with very similar underlying technology, right? So generative AI gives us a new tool belt, if you like, a new set of capabilities that we can apply to difficult engineering problems, and so it's very much ideal for some of the challenges that our customers face in designing chips. And I group them very broadly into two categories, right? One category is making it easier and faster to become more efficient with the existing tool set, the existing products. Which means that we can now come up with capabilities like our recently announced Synopsys.ai Copilot, which is a conversational front end to the tools, right?
So now engineers with maybe fewer years of experience can, right there within the tool environment, ask questions, identify next steps, and compare notes with the experiences of literally hundreds or thousands of other designers that have been part of our design journeys over the last 35 years. And that is extremely powerful, particularly as we are really gasping for talent worldwide when it comes to chip designers. In addition, we can also improve the way that we support our customers by helping them get answers to questions faster, and most importantly, within context. So we can answer, not generally, as in, what would you do for a design that exhibits a behavior like this? But more specifically: what would you do for this design that you're working on, in the current context? Which is far more powerful.
On the other hand, there are now new things that we can do with our tools. So, you know, on one hand, we had accelerating the way that people interact with our existing products. Now, we have the opportunity to solve new problems that we weren't able to solve before and leverage the power of generative AI, which is generally what? Content generation and a conversational interface. So not only can you find out how to do things, but you can also create agents that actually do things for you. Can you read this spec and, for example, create some interfaces for me so I can review them? Now, that task, which I just described in a few words and which makes sense to everybody, is weeks of work for an expert designer, right?
And we can just describe it very quickly and succinctly, get going, and then come back and revise. So great opportunities for external users, but equally great opportunities internally, when you think about the things, again, that generative AI is very good at. The data is in: there is clearly a huge productivity boost for software engineering. A lot of folks outside of EDA, outside of design automation or IP, are seeing great value in using things like GitHub Copilot to create software. Well, Synopsys has a lot of people who create software: software for our tools, and also designs for our IP products. Accelerating those teams means that we can capture a lot of the benefits of generative AI internally when it comes to our engineers' productivity.
But equally important, again internally, is our execution machine, right? And I'm talking about our sales force, our marketing organization, our finance department: everybody who, in the regular course of business, creates content, performs research, and can now upgrade their everyday operations through the power of conversation, through generative AI. And so that's a fourth area of applicability of generative AI overall, if you'd like, that we have explored successfully and are now in the process of leveraging internally as well.
How long is the road to rolling these tools out internally and starting to see an impact from them?
We are already in pilot programs with this technology. There's a pipeline of capabilities, there's an engineering roadmap, there's a deployment roadmap. We're not quite at the point yet where we can broadly deploy them, but we are at the point where we have exposed them to several hundred users already across all these domains. We're in the process of, you know, doing all our good hygiene, right? Turning great technology and 10 great examples into, you know, full-on, all-the-time great productivity. Zero hallucinations is a journey, and we're well along on this journey. I think, compared to what we see in the industry, we're actually in a pretty good spot for leveraging these technologies.
Fantastic. Fantastic. You know, one of the other areas that's really important for Synopsys is the IP side of the business. Maybe you could just describe a little bit for us what you're seeing there, or what's happening vis-a-vis AI technologies impacting your IP business.
Oh, absolutely. Our IP business has really grown tremendously over recent years. I believe, and Phil can keep me honest here, it's now north of 25% of our revenue as a company, if I remember correctly. By volume, we ship a lot of IP; in fact, I think we're probably the highest-volume provider of IP out there in the industry, across really all verticals. It's a very exciting space. We also have a lot of great designers who work in IP, particularly when it comes to analog IP. Analog IP is really the very hard stuff that makes your PCI Express work, when you get down to electrical-level interactions between transmitters and receivers. So we have a tremendous brain trust here, and we also have a wealth of technology.
So there are many opportunities to amplify that with AI. First and foremost is the concept that we can, particularly with generative AI, take this technology and amplify the throughput of our own teams. At the end of the day, we're no different from our customers when it comes to EDA productivity. So, you know, engineers here can leverage our design space optimization technologies, and designers here can leverage our upcoming generative AI capabilities to accelerate their ramp-up time and make their teams more efficient. So there's a tremendous opportunity for us to become both more effective and more efficient in the way that we design IP.
At the same time, we are also providers of the building blocks of the AI revolution in our IP business, and I'm talking about memory interfaces. I don't think it's news to anyone that AI likes a lot of memory. And also data movers; AI also likes data movers, like PCI Express 7, which we announced just this week. And so there's opportunity, I think, for growth in the AI vertical market overall here with our IP products, both in data centers, where again we're looking at memory and data movers, but also at the edge, where you're looking at the capability to process deep neural networks and small transformers in situ, within smart devices themselves. So really, across the board, all the go-to-market axes of IP get served here, right?
We have more workloads that are specific to environments requiring specialized computation, and we can supply that. We need more memory, we need more data movers, and we can supply that. And then AI software and AI-driven design tools allow us to supercharge the design machine, so we're certainly looking to take advantage of that as well.
Maybe on the customer side for the IP, who is it that's pulling this down from you? Is it the more established players, the larger companies, or is it the startups? Who's drawing down this AI-related IP?
I think I'm gonna defer to Phil for that one. If you have more recent information, Phil, please feel free to chime in.
Yeah, Blair, it's really all of the above, and for the reasons that Stelios mentioned. In addition to some of the AI capabilities that we're enabling within the team, there are also several drivers behind that business. And overall, we expect our design IP revenue to grow in the mid-teens. One of those drivers is the acceleration of standards. If you look at the standards, a few years ago a new one would come out every three or four years. That time has really compressed, and a new standard is coming out every 18-24 months now. The second thing that our IP business really benefits from is the idea of supply chain diversification.
A lot of our customers are looking to manufacture not only at one foundry but at multiple foundries, just to ensure they have a supply of the product they can make. And the way we sell our IP is that our IP titles are unique to each process node and each foundry. So you'll need a different title at TSMC than you would at Intel or at Samsung. And then the third factor that's really benefiting our IP business is that you still have a lot of our traditional semiconductor customers who have in-house interface teams designing some of these standard blocks.
In a lot of those cases, the question is: is that the best use of their resources in terms of where they allocate their engineering talent, or does it make more sense to outsource that to a third party like Synopsys? A lot of times it does. So we're benefiting from that trend of increased outsourcing as well.
Do you think that the productivity improvements that can come from applying some of these more advanced tools can help with Synopsys' margins in the IP business?
I think definitely, and I think it speaks to the reasons that Stelios mentioned, right? Our customers are benefiting from this, and given that our IP team is a team of really capable chip designers, they can get a similar type of improvement in their design process as they work through the blocks and titles that we build out for our customers.
Fantastic.
Yeah, absolutely.
I guess, you know, maybe let's take the conversation up a level and just talk about the competitive landscape a little bit. We've been talking about your technology and what you guys have been doing over the last couple of years. How is that positioning you competitively, and how do you see that landscape unfolding over the next couple of years?
No, absolutely, I can speak to that as well. You know, I was very nervous for the first year and a half after we announced DSO.ai, as we watched the months go by and didn't see any competitive announcements, 'cause I was beginning to worry that maybe this was not going to go down the path I was envisioning. And I was almost relieved when I saw one of our competitors announce a solution that sounded very similar. So I think this is the natural course for our business. What's very, very important, and that's always the leader's advantage, is to continue to be first in coming up with the technology solutions, right? That's your advantage: knowing where you're gonna go next.
I'm really excited that we have a very strong roadmap as to where we wanna go next. And we have actually built an incredible track record as to where we've been already, right? In just three short years, we've gone from the first AI application to now, what, five or six AI applications, really up and down the technology stack, to the first introduction of a generative AI capability in November, less than a year since ChatGPT came out and sort of set the standard for this stuff. Then there are the technology announcements and partnerships we have established, the collaborations with the likes of Microsoft and NVIDIA that we've talked about already.
We're going a million miles an hour, and I very much expect competition will also take those steps, will identify those paths, and will come up with competitive offerings, and that's the natural course of business. But I think we've established some good daylight, and we will, you know, do everything we need to do to keep that daylight and make sure that we bring these technology solutions to our customers first. And that will come in all flavors, you know: incrementally improving capabilities and disrupting the market as well.
Interesting. Interesting. And maybe let's shift a little bit over, to the extent that you can talk about Ansys, the acquisition that's pending for you guys. Stelios, or Phil, how are you thinking about their capabilities in this area, and how do you work your products together with theirs?
Oh, absolutely. So I'm sure a lot of the details will take some time to figure out. However, I can tell you how I see it as an opportunity, right? We've talked about two significant businesses that Synopsys runs: our design automation business, colloquially our tools, which help engineers create new circuits, and then our IP business of pre-designed circuits that can be dropped into any new chip. Now, there's a third business, and that is our systems business. That is really about the hardware-software interaction: designing entire systems of chips, potentially multiple chips, and really thinking about the software bridge. And that's where you start getting a lot of the real-world interactions that are very, very interesting.
For example, if you're designing electronics for an autonomous vehicle, you're getting a lot of test traffic, no pun intended, that comes actually from real traffic conditions, right? So applying that to the electronics world creates this opportunity to create digital twins. And digital twins for electronics are extremely powerful, because now our customers can test the systems, and the software they intend to run on those systems, to the tune of, you know, one or two or three or five million lines of code, before they commit to an architecture, which is a huge advantage. For me, I get really excited thinking about taking that principle and applying it to every physical interaction out there.
And I think with Ansys, we're gonna have this expanded portfolio that allows us to think about electronics, mechanical, environmental, all at the same time, and make the applicability of digital twin technology really global, really, you know, apply it to everything. And then if I think about the things that we've done for electronics with our design space optimization technology, verification optimization, and now generative AI, well, they sound like a perfect application for this world of future digital twins that can model the world, right? Because now our customers can explore, predict, optimize, and even generate content for extremely complex systems. So I think of it at that level of capability, and really, it's a limitless set of opportunities, I think.
I also draw inspiration from Jensen Huang's recent view of the space: you know, we're gonna go out and look at the world, and we're gonna model it, and those models are gonna come from Synopsys. And so I see some really great opportunities there for the company as the ecosystem takes shape.
Fantastic. Fantastic. And I guess the other thing that's been happening, partially what we're talking about here, is moving from just individual chips into systems of chips, right? And that's introducing whole new levels of complexity. And I guess that's partially what you're trying to address here with these AI tools.
Yes, absolutely. The fundamentals of how we build chips have been changing. The drivers have been there for a while: complexity, and the ability to also address supply chain issues. And all of those are coming together with more 3D structures. You know, we're still in the early innings, I would say, when it comes to 3D, and we're seeing a lot of commitment to that space. Early innings, as in, you know, we're actually doing chiplet-based design, but we haven't really gone into deep 3D engineering; I think that's yet to come. But at the same time, it's opening up big system-level problems that we can address computationally.
The physics now, through these 3D structures, allow us to produce ever more complex chips and systems of chips that can deliver the actual computational horsepower, in packages that were simply unfathomable just a few years ago. And in environments that can be very noisy, that can get hot or have high temperature gradients, or where you're operating from unreliable power sources, because you're mobile or because you're going through environments that put stress on your power supplies. We can now go out and do all this and deliver those systems, and that's creating a tremendous vector for growth, I think.
Obviously, these physics issues are something you've already been addressing with Ansys for a number of years, through your existing partnership with them, right?
Correct. Absolutely. With things like electromigration and IR drop, we've had a very successful partnership over the years and shown that we can deliver incredible solutions together.
If we just switch back to generative AI for a moment, because it's part of what you're looking at and researching at the center: are you looking into more design automation using AI, that is, more generative opportunities? We're generating code for programmers now, and they're taking it, and it's sometimes giving them a, you know, 15%-25% lift in productivity. Are you looking at similar opportunities in electronic circuit design?
Yes, I think the opportunities are there. Now, the technology today, when it comes to generative AI, has really been around the use of language. And so we have these machines, these large transformer machines, that can essentially help us predict the next thing in a series of things; I'll just put it that way. So there are opportunities for us to, as I said before, help designers become more effective by learning methodology, interpreting outputs, or debugging problems, right? That would be stage one. Stage two, you can help them with generating content. Now, this content can be RTL for a digital circuit; it can be new tests, for example, for a test environment. It could be all kinds of things that today are generated solely by end users, by designers.
Now, does that mean that the designer is out of the loop? No, but then again, how many designers do you know who really take pleasure in repeating the same kind of circuit 100 times over the next few years? You know, generally, solving tough problems is what designers get excited about. So if we can help them do that faster and take away some of the repetitive nature of doing things, I think we'll actually help them become more productive and happier with their work. And then from there, once you've delivered sort of the fundamental building blocks, you can start imagining sort of a new plateau of design solutions, where you're operating at a more abstract level.
And that's really where I think things like agents will come into play. Where, you know, I now know how to design FIFOs or fundamental building blocks, so I can have agents do that for me, and I can just give them simple instructions, instead of them being coding assistants. Simple instructions like, "Just design a FIFO with these characteristics for me; I can use it for something else." And then the FIFO agent can go out and do that for you, or build a model that follows, you know, this section of such-and-such standard, things like that, just to get started. So, some of these things are here today, certainly in the lab.
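As a concrete illustration of the "design a FIFO with these characteristics" instruction, here is a hypothetical Python sketch that emits parameterized Verilog from a fixed template. A generative agent would synthesize and iterate on RTL like this from natural language rather than filling in a template; the module shown is a common textbook synchronous FIFO, invented here for illustration, not Synopsys' output.

```python
def generate_fifo(name="fifo", width=8, depth=16):
    """Emit a simple synchronous FIFO in Verilog from a short 'spec'.
    A fixed template stands in for what an agent would generate (and
    revise) from an instruction like 'design a FIFO with width 32 and
    depth 64'."""
    abits = max(1, (depth - 1).bit_length())  # address bits for DEPTH entries
    return f"""\
module {name} #(parameter WIDTH = {width}, DEPTH = {depth}) (
  input  wire             clk,
  input  wire             rst_n,
  input  wire             wr_en,
  input  wire             rd_en,
  input  wire [WIDTH-1:0] din,
  output reg  [WIDTH-1:0] dout,
  output wire             full,
  output wire             empty
);
  reg [WIDTH-1:0] mem [0:DEPTH-1];
  // One extra pointer bit distinguishes the full case from the empty case.
  reg [{abits}:0] wptr, rptr;
  assign empty = (wptr == rptr);
  assign full  = (wptr == {{~rptr[{abits}], rptr[{abits}-1:0]}});
  always @(posedge clk or negedge rst_n)
    if (!rst_n) begin
      wptr <= 0; rptr <= 0;
    end else begin
      if (wr_en && !full)  begin mem[wptr[{abits}-1:0]] <= din; wptr <= wptr + 1; end
      if (rd_en && !empty) begin dout <= mem[rptr[{abits}-1:0]]; rptr <= rptr + 1; end
    end
endmodule
"""

rtl = generate_fifo(name="cmd_fifo", width=32, depth=64)
```

The template captures why this is "weeks of work" territory: the interesting effort is not typing the boilerplate but verifying corner cases like the full/empty pointer trick, which is exactly where the designer stays in the loop.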
As for the future, I think, sort of connecting the dots to what's possible, the things that I mentioned before are quite possible, and then we, of course, take it from there. There are many applications, as you think about future potential models that are not bound by language, that could completely revolutionize the space as well.
Stelios, it's interesting because, as you're seeing on the language side with LLMs, you know, enterprises are looking at tuning, or, you know, making their own models based upon their own internal enterprise data. Let's say all the HR department data or all the contracts a company may have with its customers and so forth, to train models. Is that something that you would look at? And I guess, given Synopsys' IP, it seems like you would have maybe a bit of an advantage in this area, because you have a lot to train on.
You got it. It's absolutely true. So, early on in the generative AI space, everybody got excited about models, and then I think... I'm gonna plagiarize a quote that came out really very quickly, once people started looking at these models, which was, "There's no moat around models." And so I think that exactly what you described, organizing enterprise data and using model technology that's evolving by the day or the week, to get you the best performance, deployability, responsible AI use, governance, is the name of the game, but you just can't overlook the data. Really, the data defines all these other downstream decisions. It's no different for design data.
So you need to have strong command of your data, and you need to, you know, basically put your data in a position where it can work for you and create economic value, almost like a piece of real estate. And then have, of course, all the key technologies to very quickly deploy and operationalize that data and turn it into value, right? That goes without saying. But you cannot overlook design data.
You're absolutely right. One of the things that I think Synopsys has done well, and it becomes a tremendous advantage for us in the generative AI space, is cultivating these kinds of data, whether it's from our own IP or even from thinking of our products as data-generating machines. This is really a tremendous way of operationalizing generative AI and putting it to work.
It's fascinating, and, you know, not to nail you down, but how far into the future are we talking about here? Because we've seen some very simplistic things come out pretty quickly, leveraging large language models. But are we five years away from a tool that can really help the designer actually generate the circuitry they're asking for, or what are your thoughts?
I think that when it comes to helping the designer, we're there today, meaning we are already helping the designer do things faster, better, easier. And then if I look over, as you said, a horizon of five years, I think what you're looking at is a constant drumbeat of capability that is going to keep coming in, building upon the prior capability set to add more value and more power to that concept. Now, in five years, where will we be? What level of intelligence and automation?
Of course, nobody has a crystal ball, but I think that with this virtuous cycle between building great data assets, making them easily deployable, and then having the designer come in and add their decision-making and experience on top of it, we're gonna see some really fast ramps of productivity several times over, over the next five years. And I'm also sure that, while we at Synopsys are adding a lot of value on top of these existing technologies, and we're even researching new technologies, you know, nobody really saw a lot of the future before November 2022, when ChatGPT came about-
I know.
and I think we'll experience a few more of those moments in the next few years as well. So being flexible and being able to take sharp turns is gonna be key as well.
Yeah, I totally agree with you. It's been very interesting to see how rapidly it has evolved with LLMs. They were certainly in research papers two or three years prior to that, but they seemed to be too big to deal with, and yet here we are now with open-source models approaching the capabilities of some of the best private models that companies have spent hundreds of millions of dollars on. So, yeah, I guess it's too early to say, but are you thinking internally that Synopsys will build your own branded models, or are you going to just, you know, leverage third-party ones? How are you gonna approach it?
Well, I think presently we see tremendous opportunity to leverage existing capabilities. At one end, there are hosted, very large models. And what does that mean? That means you require large computational clusters of the kind typically found in cloud environments. At the other end are small models, and, I mean, now we're talking about, you know, 7 or 10 billion parameters being small models-
Yeah
... but that says something in and of itself. Even small models can be deployed in a completely different way, in environments with more traditional computation available, cheaper hardware. And then there's also being able to access data that's generally difficult to get to, because it's, you know, core IP or because it was just generated a millisecond ago, right?
Yeah.
So, you have to really leverage the entire gamut of capabilities, so I think a successful strategy, you know, really mandates this. And I see a ton that can be done with what's there today. In fact, we're already doing a lot of it. But going forward, I really very much see a future where Synopsys and other technology leaders, too, will want to explore more customized models that really fit our data signatures a little better than spoken language, which is what the current technology is based on.
Yeah
... and those will give us even more opportunity, right? And that's what I'm saying. I see a future cycle of several ramps of value as some of these things come into play, with quite a fair amount of certainty as well.
Yeah. Fantastic. Fantastic. Well, listen, we're up against our time here, Stelios, and Phil, I really, really appreciate it. You guys are at the leading edge, as always, or ahead, which is fantastic to see. So looking forward to more great ideas coming out of Synopsys in the next couple of years.
It was a pleasure being here. Thank you so much.
Yeah, thanks for having us.
Thank you.