Cadence Design Systems, Inc. (CDNS)

Morgan Stanley’s Technology, Media & Telecom Conference 2024

Mar 6, 2024

Steve Byrd
Global Head of Thematic and Sustainability Research, Morgan Stanley

Great. Now, let's get back to business, shall we? Anirudh, welcome again to the stage. Cadence, I had to have a look at the share price chart. The share price has quintupled in four years. It's been a fantastic journey. But for those who maybe just don't know Cadence, could you reprise what it is you do, where you fit in the EDA space, and how EDA informs the semiconductor universe?

Anirudh Devgan
CEO, Cadence

Good to be here. Thanks, Steve, and thanks for attending and for your interest in Cadence. We are basically a software company, I'd say computational software. We sell software products to design chips and electronic systems. So almost any chip designed in the world today uses some form of Cadence software. And then, of course, there has been all this emphasis on semiconductors in general over the last few years and going forward. What you may already know, but some people don't realize, is how essential this design software is to designing these chips. Because a chip right now could have something like 100 billion transistors in roughly 1 inch by 1 inch, and these transistors are all nonlinear switches, basically.

Designing them cannot be done manually, not even close. For years they have been designed by software, and that's the software we provide.

Steve Byrd
Global Head of Thematic and Sustainability Research, Morgan Stanley

Perfect. So a big part of the design flow here for semis. And what we've heard a lot about recently is obviously the System Design & Analysis business that you've moved into. So maybe you could explain, well, how does that fit into EDA? What are the growth drivers? And how should we think of that as an important part of the business going forward?

Anirudh Devgan
CEO, Cadence

Yeah, absolutely. So the way we look at the world, and you probably already know all this, but just to frame what we are trying to do: we look at the world as three concentric circles. The innermost circle is silicon. Then the system, which could be a car or a plane or a phone. And then the data that surrounds it. This is happening in all verticals, right? A perfect example is an electric car. You have all the navigation data. Then you have the physical car, which is electrical plus mechanical, and it's hardware and software. That's what makes a system.

And then the chips that drive the car, with self-driving or infotainment. So that's what's happening in the overall customer landscape. In terms of our expertise, what Cadence has been good at over the last 30 years is EDA. And what is EDA? Basically what I like to call computational software: computer science plus math. This is not your regular software. It's very numerical, very mathematically intense software. That's the history of EDA, and that's my own background. And because of Moore's Law and silicon, the complexity of the software has had to increase every few years, and that has happened for 30 years.

Now, take computational software and overlay it on these three circles. Computational software applied to silicon is EDA, which is our core business, and we are the leading provider of EDA software. Then, naturally, companies always want to grow, but you have to go into an adjacent area, adjacent both in customers, like these three circles, and in competence. So apply computational software to the system: system software is about a $50 billion market overall, but about $10 billion of that is computational, like simulation, simulating planes and trains and thermal and data centers. And in 2018 we saw this coming, this convergence between system and silicon.

So in 2018 we made a big effort to go into what we call SD&A. EDA is for chips; SD&A is for systems, like planes and trains and data centers. And if you apply computational software to the third circle, which is data, that is of course AI. Because AI, in essence, is matrix multiply; that's inference. And iterative matrix multiply, or conjugate gradient; that's training, right? So there are a lot of similarities between the algorithms in EDA, in SD&A, and in AI. Our expansion over the last six, seven years has been into these areas. And of course, these are overlapping circles. So what's happening is that the system companies are also doing silicon. This is a big trend, as you know: all the big data center companies, the hyperscalers, and the car companies. Phone companies have been doing it for a while.
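Devgan's shorthand here, inference as a matrix multiply and training as iterative matrix multiplies in the spirit of conjugate gradient, can be made concrete with a small NumPy sketch. Everything below is illustrative: the matrices, vectors, and weights are made up, not anything from Cadence.

```python
import numpy as np

# Inference: a single dense layer is, at its core, one matrix multiply.
W = np.array([[2.0, 0.0],
              [1.0, 3.0]])   # made-up "learned" weights
x = np.array([1.0, 2.0])     # input vector
y = W @ x                    # y = [2.0, 7.0]

# Training-style iteration: the conjugate gradient method solves A v = b
# for a symmetric positive-definite A using nothing but repeated
# matrix-vector multiplies, the algorithmic kinship being pointed at.
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    v = np.zeros_like(b)
    r = b - A @ v            # residual
    p = r.copy()             # search direction
    for _ in range(max_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        v = v + alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return v

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # symmetric positive-definite
b = np.array([1.0, 2.0])
v = conjugate_gradient(A, b)  # agrees with np.linalg.solve(A, b)
```

The same matrix-vector kernel dominates both loops, which is one reason solver-style workloads and neural-network workloads map onto the same accelerated hardware.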

So there's overlap. We sell software to system companies that are not doing chips, like McLaren, to design the shape of the car so that it goes fast. Even with AI, we can apply AI to our own products and also participate in building out the AI infrastructure. SD&A, which we have been doing for the last six, seven years, is growing at more than 20% a year for us and is also a very profitable business. Even in EDA in general we are very profitable, and we are very focused on profitability. But within the EDA segment, the most profitable part is simulation, because one engineer can launch, like, 100 runs and do a lot of exploration.

So that's the other thing I like about SD&A: it's not only synergistic from a customer standpoint and an R&D standpoint, it also has a good financial return in terms of profitability.

Steve Byrd
Global Head of Thematic and Sustainability Research, Morgan Stanley

Maybe just staying on SD&A, there was an interesting acquisition you've announced today, I think: BETA CAE, in the simulation space, it seems. But interestingly, you've touched on automotive, and it looks as though that's a very automotive-geared business as well. So maybe you could walk us through the deal rationale and some of the profitability you could see in a business like this as it goes forward.

Anirudh Devgan
CEO, Cadence

Yeah, I love the automotive space. And as you know, it's going to go through a lot of transformation. Let me comment a little on the market before I comment on BETA. The silicon business right now is $500 billion, and the system business is about $3 trillion, electronics and all that. And there's a lot of talk that the $500 billion will go to $1 trillion in semiconductors. Of course, a lot of it right now is made up of mobile, consumer, and PCs, and I think those will do well. But the two new growth areas that we project are data center and automotive. So even for the silicon part, the projections are at least $300-$400 billion more in market.

Automotive is interesting. The data center part is well understood, right, with AI, and I can talk more about that; we are glad to partner with all the leading companies there. But in automotive, one part is the electrification of the car, which is happening, of course. With that electrification, the silicon content is going up. Right now, there are different numbers, but on average it's something like $400 of silicon content per car. And given all this electrification and the need for more differentiation, because right now most of the differentiation is in the powertrain, AMG or Turbo or whatever the companies call it, going forward it will be more in semiconductors and electronics.

So it's projected that each car will have $2,000-$4,000 of silicon content in the next few years. There are 100 million cars sold every year, so that's $200 billion-$400 billion more of silicon content. So there will be a lot of growth vectors for semiconductors. But those are the two big ones, and one is slightly more delayed than the other because auto takes more time; they're a little more conservative, which they should be. So it will be data center, a few hundred billion, followed by automotive, a few hundred billion. And that's in our core business of EDA and silicon design. But there are also implications for the system part of the business, because of all this overlap between system and silicon. And in the system part of the business, BETA has a good presence.
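As a quick sanity check on the arithmetic quoted above (the per-car dollar range and unit volume are the projections from the answer itself, not independent data):

```python
cars_per_year = 100_000_000               # ~100 million cars sold annually
silicon_low, silicon_high = 2_000, 4_000  # projected $ silicon content per car

tam_low = cars_per_year * silicon_low     # incremental TAM, low end
tam_high = cars_per_year * silicon_high   # incremental TAM, high end
print(f"${tam_low / 1e9:.0f}B to ${tam_high / 1e9:.0f}B")  # → $200B to $400B
```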

They are used by almost all car companies for structural analysis, which is one of the biggest segments in the simulation part of SD&A. So it's a good size, about $90 million in annual run rate, and they have good reach. It completes our portfolio on the system side. And there are also a lot of synergies back to the silicon side, given the growth that's going to happen in automotive.

Steve Byrd
Global Head of Thematic and Sustainability Research, Morgan Stanley

Gotcha. Fascinating space emerging there, it sounds like. And as I maybe struggle to understand all the component parts inside System Design & Analysis, never mind the verticals, it looks as though there's great opportunity in 3D ICs and maybe around this space called computational fluid dynamics. How does that manifest as an opportunity for you guys?

Anirudh Devgan
CEO, Cadence

Yeah, great question. These are two big areas. 3D IC, as you probably know, is another way of saying system-in-package. If you look at a board, you'll see these black things, which are chips, but typically each one is a package. In the old days there was only one chip in that package; now there are multiple chips, and they can be stacked. So it's another dimension to extend Moore's Law. It's a big trend, and I think it will be for the next 10 years. And Cadence is very well positioned in 3D IC, because to do 3D IC well, you need at least three components. You need the IC tools, analog and digital design. You need the package tools.

And you need all the analysis, because thermal and the like are a big issue for 3D IC. One of the biggest issues is thermal, along with mechanical stress and things like that, because once you pack these things so densely, the thermal profile is huge. Look at Grace Hopper or all these chips now: they consume so much power in a very small package. We are well positioned because we are the company that has both analog and digital IC tools, we have a leading position in packaging, and since 2018 we have all the analysis tools, thermal and mechanical stress. So if you look at TSMC, for example, they launched the 3Dblox flow about 18 months ago, and that's primarily based on Cadence solutions.

Now we're working with all the other major foundries. 3D IC is almost like a stapler between chip and system: it's not a full system, but you're moving toward these complicated chiplet-based architectures. And most of the industry is going that way. It started with high-performance computing, with AI and all these servers, but now automotive is also moving toward chiplets. And there are a lot of advantages; we could talk about 3D IC for a while. For example, if you look at some of these big server chips, Amazon has Graviton, which has, like, six, seven chiplets in a package. And when you go to the next generation, you don't have to redesign all of them.

Just some of them can move to more advanced nodes. So there's a lot of reuse, and there's a lot of efficiency in that. I think it's a great area for the industry in general and for us. Now, one of the big things in 3D IC is thermal. If you talk to the big foundries, they'll say thermal is a big issue. And Intel recently announced backside metal, where instead of the wiring on top, it goes from the bottom of the chip, and all the other foundries, TSMC, Samsung, are looking at backside metal too. What happens when you go to backside metal is that thermal becomes even more critical, because there's less silicon substrate, which is a very good conductor of heat.

So 3D IC and backside metal, which are both going to happen, make the thermal problem much harder. And at the system level, the thermal problem is huge, with all the data centers and how much power they consume. Now, to do thermal well, you need both finite element analysis and CFD. That's the reason we went into CFD a few years ago: finite element is closer to traditional EDA, and CFD is not. And if you do CFD, you might as well do CFD in general, right? So a few weeks ago we launched an exciting new product in CFD called Millennium. CFD is a big market. It's computational fluid dynamics, basically air flow and liquid flow, so it has a lot of applications.

One of them is thermal, but it has general applications: plane design, car design, data center design. So it's a big market. And the way we approached it, about two years ago: we are an R&D-driven company, a technology company, so we want to out-innovate. Of course, everybody says that, but you have to actually do it. So we acquired a company out of Stanford two years ago that has a new way of doing CFD, which is much more accurate. If you look at the chip industry, we simulate about 99% of the chip. We'd like to believe 100%, but you can never get to 100%, because these are NP-complete problems; so 99.x%. And when the chips come back, they work right the first time.

And that's the big thing in the silicon business, right? But if you look at CFD for designing a plane, for example, or a car, and I talk to a lot of the aerospace companies, they simulate about 20% of the scenarios. Not that they don't want to simulate the others; it's just not accurate enough or fast enough to simulate the others. And if you go to biology or something, they simulate, like, 1% or 2%. The reason it's only 20% is that they do verify the remaining scenarios, but they verify them through physical tests, like wind tunnels and things like that. That's, of course, not as efficient as doing it in a computer. And there is this big trend toward digital twins and all that.

But still, there is a lot of room there. What I like to say is that in the EDA or semiconductor business, we don't have a digital twin. First of all, we never used that term, because it was never a semiconductor term. But if I were to use it, we would have a digital mother, because the computer model is the golden representation. That's the value of semiconductors, and we want to bring some of that into the system space; there's a big opportunity to do that. Now, going to 99% or 100% will take time, but at least we can double coverage, to 40%, 50%, 60%. And the reason they don't simulate more is that it was not accurate enough.

If you look at a commercial flight, takeoff and landing, which are, by the way, the trickiest parts, are too nonlinear, with too much turbulence, for traditional CFD to work accurately. What we did to solve that problem is, first, we acquired this company, 30 years of research out of Stanford, which has this high-accuracy method. It's called LES, the Large Eddy Simulation model, and it can cover the whole space, but it's more computationally intensive because it's more accurate. And speed is a big issue, not just accuracy; accuracy is number one, but then speed. Now, to get speed: the way these methods work, they are very well suited to GPUs. Traditional CFD only speeds up a little on GPU.

But with this new approach from Cascade, the company out of Stanford, you can get, like, 100-1,000x speedup on a single GPU, and these GPUs are also getting faster and faster. Then you can put AI on top of it to accelerate it further. So this combination of AI, plus physics-based simulation that is more accurate, plus accelerated computing with GPUs, can really give a lot of speedup. One rack of Millennium is equivalent to, like, 32,000 CPUs. So that's a new kind of disruption in CFD, not just for thermal, which is how we started, but general-purpose CFD to simulate planes and cars. And we already have a lot of excited customers.

This is the first time in systems, but we have done this in silicon with Palladium, which is used by NVIDIA and a lot of big companies to verify chips; that's how we get to 99%. The same philosophy we want to apply to verifying systems.

Steve Byrd
Global Head of Thematic and Sustainability Research, Morgan Stanley

Gotcha. So I'm going to try and summarize that, because there was quite a lot you gave us there. So intralayer, you've got quite a lot of EDA, the core tool set, at work there. Interlayer, effectively, is where you start to bring in CFD, pipe-cleaning it in 3D ICs first. But once you've gone that far, you move to general purpose, so you can move into full systems. And that includes simulating the Navier-Stokes equations, LES, etc. And with that, it brings you newer, bigger markets to go after. And it's Cascade, you said, was the name of the company?

Anirudh Devgan
CEO, Cadence

Yes.

Steve Byrd
Global Head of Thematic and Sustainability Research, Morgan Stanley

Fantastic.

Anirudh Devgan
CEO, Cadence

So it's, like, two years in the making, this product, but of course 30 years of research behind that, combined with GPU acceleration. So it's super exciting. But in general, what I'd like to say, and I've talked about this for a while, so I don't know if you've heard me before: one thing is these three concentric circles. And somebody told me, "Anirudh, everything with you is three things," which is true, by the way. My PhD advisor told me the answer to life is e, you know, about 2.7. So how many kids should you have? Two or three, something like that. So everything is three, okay?

So the other three things, apart from the three concentric circles, and this is going to happen in all markets, is what I call a three-layer cake. And I'll tell you why I call it a cake. When you eat a cake, I don't know anybody who eats it layer by layer; you have to eat all three layers together. You consume all of them together. The middle layer of that cake is what we would call principled simulation, basically physical intelligence, based on physics, chemistry, biology, the fundamental differential equations, calculus. And like I was saying earlier, that's computational software; that's our core expertise. Now, it could be different in some other industries.

But I think there's an aspect of physical intelligence. Then the bottom layer of that cake is accelerated computing. It used to be CPU, then cloud and multiple CPUs; now it's GPUs, FPGAs, or domain-specific computing. And the top layer is AI orchestration: you use AI to do the data science, not just physical simulation, or physical intelligence, but data intelligence. That combination of the three layers is going to be very profound. And then it has to be verticalized, because these are all horizontal technologies in my mind. AI is a horizontal technology. Physical simulation is a horizontal technology. Accelerated compute is a horizontal technology. But the value will be in the verticalization of that. So one vertical is, of course, chip design. One could be car and system design.

Another could be, like, drug development or humanoid robots or whatever you want. So I think that's what we want to do: verticalize this in areas that are critical for us, which is SD&A, EDA, and so on.
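One way to picture the three-layer cake described above is as three swappable layers of code. The sketch below is an illustrative structure invented for this summary, not a Cadence API: each layer is a stub (a pass-through decorator for accelerated compute, a toy cooling model for physical simulation, a plain loop for orchestration), purely to show how the layers compose.

```python
# Bottom layer: accelerated computing. Here just a pass-through decorator;
# in practice this is where GPU/FPGA/domain-specific dispatch would live.
def on_accelerator(fn):
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

# Middle layer: principled physical simulation. A toy explicit-Euler
# cooling step for a lumped thermal model, dT/dt = -k * T.
@on_accelerator
def thermal_step(temps, k=0.1, dt=1.0):
    return [t - k * t * dt for t in temps]

# Top layer: orchestration. A stand-in for AI-driven control: decide how
# long to keep simulating until every node cools below a target.
def orchestrate(temps, target=1.0):
    steps = 0
    while max(temps) > target:
        temps = thermal_step(temps)
        steps += 1
    return temps, steps

final_temps, steps = orchestrate([100.0, 80.0, 60.0])
```

The point of the layering is that each layer can be replaced independently: a real deployment would swap the decorator for accelerator kernels and the loop for a learned policy, without touching the physics in the middle.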

Steve Byrd
Global Head of Thematic and Sustainability Research, Morgan Stanley

Makes perfect sense. If I could change direction slightly here: Dassault seems to be someone you are increasingly collaborating with. It sort of makes sense given some of the changes happening in this market. But could you maybe walk us through the types of collaboration you have with them, where the products overlap, which products you use, for instance, and even the markets where you think that collaboration might be relevant?

Anirudh Devgan
CEO, Cadence

Yeah. Dassault is a great company and a great partner to Cadence. We build partnerships in areas that are not our core expertise, with partners who are strong there. Of course, we have a great partnership with Arm, for example, over the years. We have a great partnership with NVIDIA, a great partnership with TSMC. And recently, over the last few years, a lot of partnership with Dassault. I mentioned these three concentric circles, and our expertise is computational software, which in the system space is more simulation-heavy. But there are other implementation platforms, which we don't do or which are not our focus, where Dassault is best in class. Like CATIA: all the planes and ships and all the really complicated things are designed on it.

CATIA is an enterprise-class platform for the mechanical aspects. And then SOLIDWORKS is another industry-leading platform from Dassault, for all the consumer devices. And they are also very good in PLM. So this is a great partnership. On the mechanical side, they're the leader, and we believe we are the leader on the electronic side. We also have a very good, strong business in PCB with Allegro, which has been a Cadence brand for a long time and is widely used in the industry. So this electrical-mechanical convergence is true not just in simulation but also on the design side. That's the partnership with Dassault: to leverage the full solution across both.

Steve Byrd
Global Head of Thematic and Sustainability Research, Morgan Stanley

Perfect. Makes sense. Maybe if I change to IP. Design IP seems to have been a growing part of the business this year, and some of us look at that as maybe a function of growing use of HBM, perhaps, and AI. But maybe you could help us understand: how is that business growing and doing so well?

Anirudh Devgan
CEO, Cadence

Yeah. Of course, a lot of you may know this already, but for those who don't: with IP, you don't only sell the software, you also sell pre-made blocks for the chips, like interface IP, DDR or PCIe or HBM, that customers don't have to redesign because they're standards-based. A good example for CPUs, of course, is Arm. We have traditionally not done as much IP; we focused more on the software side. But we have an IP business, which is, I think, 12%, 13% of our revenue, and I wanted to make it more profitable, because historically IP is not that profitable, for a lot of reasons. I'm happy with the profitability now; we worked on it for several years.

And also, our scale is getting bigger. So that's one thing. The second thing is that there is a lot of new demand, driven especially by AI, data centers, and all this disaggregation into chiplets. There are special IPs, like UCIe, which connects one chip to another, and then HBM, of course, to memory, and DDR and PCIe. So there's a lot of growth potential there. We had a good Q4, like you mentioned, and this year we also expect good growth in the IP business. So it's a combination of what has changed in the market and our own internal operations; the team is much more efficient.

Steve Byrd
Global Head of Thematic and Sustainability Research, Morgan Stanley

Gotcha. OK. Makes sense. Maybe talking about AI: it's not just that you're designing for AI chips; there's a utilization of AI as an enablement of your tools. And, you know, we saw that with Cerebrus as one of your first products out of the gate. That seems to have been growing quite nicely through 4Q. But what are your expectations for that whole business line in 2024? And where is that being adopted, do you think, in particular?

Anirudh Devgan
CEO, Cadence

Yeah. I mean, that's a huge topic, right? At this point, almost all our customers are using our AI portfolio. If you look at the three-layer cake, the top layer is AI orchestration, and that is there on both the EDA side and the system side. At this point, we have five major platforms in the top layer for generative AI. And AI itself, I think, will have, like, three phases of adoption, just like any new technology. Back to three, right? The first phase of adoption, as with any new technology, is infrastructure, which is what is happening now, just like what happened with the internet.

So a big portion of our business is selling our regular EDA, IP, and SD&A to the AI infrastructure companies. NVIDIA, of course, being a remarkable partner; they are doing phenomenally well. But there are also AMD and all the hyperscalers, and of course we work closely with them. We announced a partnership last year with Tesla, which is also a self-driving AI chip, slightly different. And then all the other car companies that are doing silicon. So that's the first thing: the infrastructure build-out of AI. We benefit from that, and we are central to making it happen. The second part, which you mentioned, is applying AI to our own products.

The real value of this kind of generative AI is that it can automate things that we could not automate before. So the question is, why didn't you do this before? Because we didn't have the technology to do it. We have a long history in EDA of automating things, going back 30 years, because these chips are so complicated they have to be automated. But what we never automated is the workflow. We have these complicated tools that do a lot of work, but they run for, like, one or two days, and the way users design chips, they don't run them just once. They run something, then they change something, then they run it again. That's the natural design process, right?

So I was looking at one CPU we are doing for an auto company. Just to give you an example, they had 17 variables the user was changing. Some are process options, some are tool options, some are design variables. That was still done manually until recently. The user knows: OK, I designed this CPU last time in 7 nanometers; now I'm doing it in 3 nanometers, so this is roughly what it should be. And it takes, like, six months, with a group of 30, 40 people, whatever the number is, based on their intuition. Now, you may ask, why don't you do it mathematically? You've been doing computational software for 30 years; why didn't you automate that?

The problem is that the traditional way to do it mathematically is design of experiments, from statistics. But exhaustive design of experiments on this problem takes 4 million runs, and there is no way: one or two days per run, times 4 million. Now, with generative AI and reinforcement learning, we can do it mathematically. It builds a model, and then you use that model to traverse the design space. So in this example, what would take 4 million runs with traditional brute-force design of experiments takes only 200 runs with AI. Now, 200 is still a lot compared to one, but you can parallelize some of it, so we run it on 10 machines.

And there are ways to make it more efficient, because the earlier runs don't need full accuracy. So in the end, at 3x-4x the compute cost, on 10 machines, we can do it mathematically, and it gives better results, of course. It's much more efficient: one or two weeks versus three months. Now, they still iterate; that's human nature, it's not that they're done. But that iteration cycle is much, much better, so the productivity could be, like, 5x-10x better. But what is really more powerful than that, I believe, because productivity is good, but what is even better, is that the result is better than what can be done by a human. It's very difficult to optimize anything in that many dimensions.
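The contrast being drawn here, exhaustive design of experiments versus model-guided exploration, can be sketched with a toy objective. Everything below is hypothetical: 6 binary variables instead of the 17 mixed ones in the anecdote, a made-up scoring function standing in for a one-to-two-day tool run, and a greedy local search standing in for reinforcement-learning-driven exploration. The point is only that guided search finds the same optimum in far fewer "runs" than brute-force enumeration.

```python
import itertools
import random

# Toy stand-in for a one-to-two-day EDA run: score one configuration
# of design variables against an imaginary best setting.
def run_flow(cfg):
    target = (1, 0, 1, 1, 0, 1)  # hypothetical "best" settings
    return -sum(a != b for a, b in zip(cfg, target))

N = 6

# Brute-force design of experiments: 2**6 = 64 runs here
# (2**17 = 131,072 for the 17-variable case, before continuous options).
best_exhaustive = max(itertools.product([0, 1], repeat=N), key=run_flow)

# Model-guided search: greedy one-variable-at-a-time refinement, a crude
# stand-in for learning a model of the design space and traversing it.
random.seed(0)
cfg = tuple(random.randint(0, 1) for _ in range(N))
runs = 1
improved = True
while improved:
    improved = False
    for i in range(N):
        flipped = cfg[:i] + (1 - cfg[i],) + cfg[i + 1:]
        runs += 1
        if run_flow(flipped) > run_flow(cfg):
            cfg, improved = flipped, True

print(runs, "guided runs vs", 2 ** N, "exhaustive runs")
```

Real design spaces are not this well-behaved (greedy search would get stuck in local optima), which is exactly why the production approach described above uses learned models rather than simple hill climbing.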

So in a lot of cases, the result is better. Now, if the designer had a lot of time for each block, maybe they could match it. But normally there's a distribution of talent in any organization, right? We have never been worse than the best designer, and most of the time we are better than the best designer. And across the whole distribution, sometimes we are way better, 10%, 20% better, which is huge, by the way. If you're going from one process node to another process node, you know how much that costs; we can get half of that benefit through better software, through the AI benefit.

So significant value is created by applying AI to chip design, because chip design has always been very developed but never had this workflow optimization. Some other industries have to reach that first level of automation before applying AI on top. So I do believe applying AI to chip design is a very good use case. And we are doing that with our partners, like NVIDIA and other companies, who are themselves applying it internally. All our top 10 customers are using it: Renesas, for example; we talked about Samsung; Intel. We have collaborations with all of these companies, and they have talked at our conferences and publicly disclosed the benefits. That, to me, is incremental value we are providing, and you have to buy it on top of the base tool.

It's like Office: you buy the base tool, and then you buy Copilot, the AI that drives it, on top. So that's the second kind of value of AI: applying it to our own tools. The first is the infrastructure. And the third is the new markets it will enable, just like the internet, which in the end enabled new markets, like Facebook, that were not possible without it. So I think AI will also go through these three phases. We are deep into the first phase, and I think we are starting the second phase, because it takes a while for these tools to be deployed. Even though the value is huge, it normally takes a few years to deploy new tools.

And then the third thing, which may take five to 10 years, a little longer horizon, is new applications. And I believe there are multiple new applications, but one of the biggest will be digital biology, okay? So two years ago, we invested in molecular simulation. Of course, this is a longer-term thing, but I think it's good to invest before it's too late, and I think AI and simulation can play a big role in biology too. See, just like chips: in chips, we are simulating 99%. In systems, we are simulating 20%, 30%. In molecules, we are simulating 1% right now. So that will take a longer time, okay? But that's the sequence in my mind of AI adoption.

Steve Byrd
Global Head of Thematic and Sustainability Research, Morgan Stanley

But it's fascinating that you can still see the scope to capture value as we go to that third stage and move out to digital biology, as you say. Quite interesting. I'm going to open up the floor and see if there are any questions. I think he put his hand up first. Sorry, guys, maybe you were quicker; I just didn't see.

Speaker 3

Can you talk a bit about the competitive landscape? Synopsys, of course, is acquiring Ansys. Does that give them capabilities that maybe you currently don't have? I'd like your opinion on how the competitive landscape is evolving over time.

Anirudh Devgan
CEO, Cadence

Yeah, sure. I mean, we started this journey, like I mentioned, in 2018, because of these three concentric circles and this SD&A strategy. At that time, people asked me, hey, Anirudh, what's this? It doesn't make any sense; EDA companies doing system design and simulation? So now you're seeing other people realizing the value of that. That's the way I would like to put it. But we were doing pretty well in that market anyway, and we have a pretty complete portfolio. One area I wanted to make sure we have is structural analysis, which is the acquisition we announced today. But overall, I think we are doing well. We have newer products, and I feel very confident about our competitive position.

Our SD&A market, just to give you an example, by the numbers: last year, in 2023, we grew 22%, and this is now a significant part of our business, which was not the case in 2018. Over the last several years, we have grown 20%-plus, even though the overall market is growing much less than that. So that's just proof that our products are good and we are able to win with customers. So I feel pretty good, you know. And we always say organic is delicious, you know? So we want an R&D culture of organic innovation. Now, of course, we do some inorganic. Why not, right, if it can turbocharge us?

But I think we want to be a technology-focused culture with organic innovation. I think we are well-positioned to do that, and that's also a good return for the shareholders. It's fine to do some inorganic, but most of the growth can be organic.

Steve Byrd
Global Head of Thematic and Sustainability Research, Morgan Stanley

Great. We had two over here.

Speaker 4

So, sticking to this concept of three: you've been a great organic story, and you've been a great compounder of shareholder wealth. So thank you on behalf of our unitholders; we've been a great long-term shareholder. But what are the three things that we should worry about longer term for your business?

Anirudh Devgan
CEO, Cadence

Oh, wow. Well, I feel that if I look at now compared to, like, five, six years ago, I think we are better positioned than we were. So, I don't know. I mean, make sure you have enough allocation, because I would like to say that we are doing R&D software, right? If you think about it, we are doing design software; it's R&D software. And sometimes people ask, oh, are you a semiconductor company? We are not. Are you really a software company, like SaaS? We are not. So sometimes not enough people know our characteristics. Of course, more people are getting to know them. But the value is that we are tied to R&D rather than revenue, and there are some benefits to that.

Now, the flip side of that is that R&D doesn't fluctuate that much up and down. So some companies that are directly tied to production could see a much faster impact from AI, which is well-deserved, right, like the great partners we have. We are more stable; it's like you have to pass it through a low-pass filter, so you have less variation. But I think that, to me, is a feature, not a bug. That's one thing investors should remember: we like to be a compounder of value, so consistent growth and good margin. That's what I would say.

I mean, the main thing for us to worry about, I guess, is the main thing I worry about. In the earlier days, like 10 years ago, I would worry about what's happening in the semiconductor market, because back then we were worried there would be a lot of M&A in our customer base, and that was happening. But now, with the resurgence of semiconductors, what we actually have to worry about is just being best in class: make sure our products are good, that we provide value to our customers, focus on R&D, focus on the team. You know, we always say team, technology, and customers; that's three also, okay? And everybody says customers, which is true, by the way. Everybody wants to be customer-driven.

But if you don't focus on the team and the technology, the customers don't like you, right? So we want to make sure that we have the best team. There are a lot of changes in the workplace, so we have to make sure we can hire the right people. And there are a lot of changes in technology. So as long as we stay relevant, we are in a good market too. So for me, it's making sure we have the alignment. We say win with the winners, because all the customers are very valuable, but there are some customers that are driving entire industries.

So to have partnerships with NVIDIA, or with Arm, or with TSMC on the system side; to align with the really game-changing customers, and a lot of other customers I can't talk about, but they are equally important, okay; and then have the right technology and the right team.

Steve Byrd
Global Head of Thematic and Sustainability Research, Morgan Stanley

Maybe your cell phone.

Speaker 5

Could you speak a little bit more about the hardware verification business and the drivers? Does it become compulsory as customers move to smaller nodes and more complex chip architectures?

Anirudh Devgan
CEO, Cadence

Yes, yes, yes. I mean, hardware verification has seen huge growth over the last few years. And the reason for that, as you know, is this need for first-time-right: this 99% or close to 100% verification is essential for chip design, because otherwise it's too expensive, right? And the way we do that: in the old days, you would do the silicon design first, for three, four years, and then the system or software design, which would take another year. And then the system would come out, like a mainframe or a server or a car, whatever it is. So that's a very long process. But nowadays, if you look, the silicon comes out.

Within three months, you can get the systems, right? The best-in-class companies overlap silicon and software development, okay? So that means you just finish silicon, and then the system comes out, right? This is true for phones, it's true for AI chips, it's true for a lot of things. Now, how is that possible? When you're doing software development, you don't have the chip yet. So you have to emulate the chip. We sell these Palladium and Protium systems that basically mimic the chip even before it is manufactured. These are, like, supercomputers, basically. And NVIDIA has been a great development partner on Palladium for the last 15 years, okay, along with a lot of the other big companies.

So then, first of all, you can verify that the chip is correct. But you can also boot Windows or Android or whatever your software stack is, so when the chip comes out, it is functional. So this is a beautiful thing. And we build a custom chip at TSMC, and there are 144 of them in these supercomputers, which are liquid-cooled, okay, to do the emulation. And it runs 1,000 times faster than a typical CPU would run for this kind of application. So now, as more and more people go to advanced nodes, bigger chips, all these AI chips, it's almost a given that you have to use hardware. Otherwise, first of all, you cannot verify the chip properly, and you cannot write the software. And then all these system companies, the big hyperscalers, do chip design.

Of course, they are system companies because they have software, right? That's the first definition of a system company, along with hardware. So all this software can be booted up on these systems. That's why, over the last few years, our hardware business, which is Palladium and Protium, has done very well, and almost half of our verification business now is hardware-assisted, because you can accelerate by 1,000x. And that's why we went into CFD this way: what happened in chip design should happen in other areas, because hardware acceleration can provide a huge boost going from simulation to emulation. You know, that 1,000x changes your use case. You can boot software and do all kinds of things you cannot do on a CPU.
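A rough back-of-envelope shows why a ~1,000x speedup changes the use case rather than just saving time. All numbers below are illustrative assumptions, not Cadence-published figures: booting a full software stack takes billions of chip cycles, which is months in software simulation but hours on an emulator.

```python
# Back-of-envelope: why ~1,000x matters for booting software on a
# pre-silicon model. The cycle count and simulation rate are
# illustrative assumptions, not published Cadence figures.

boot_cycles = 20e9            # assumed cycles to boot a full OS stack
sim_cps     = 1_000           # assumed full-SoC software simulation, cycles/sec
emu_cps     = sim_cps * 1000  # hardware emulation at ~1,000x

sim_days  = boot_cycles / sim_cps / 86_400
emu_hours = boot_cycles / emu_cps / 3_600

print(f"software simulation: ~{sim_days:.0f} days")   # ~231 days
print(f"hardware emulation:  ~{emu_hours:.1f} hours") # ~5.6 hours
```

With those assumed rates, an OS boot that is simply infeasible in software simulation becomes an overnight emulation job, which is why software bring-up can overlap silicon design.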

Speaker 6

What kind of chips are in the hardware?

Anirudh Devgan
CEO, Cadence

We design our own chips. This is a special Boolean processor.

Speaker 6

It's more like an ASIC?

Anirudh Devgan
CEO, Cadence

It's not an FPGA. It's a proprietary architecture. That's why we are the leader in this.

Speaker 6

It's proprietary to you, the design of that chip?

Anirudh Devgan
CEO, Cadence

Our own? Yeah. That's the advantage also. And then we make it at...

Speaker 6

What percentage of chips, like the big AI chips, are doing that? Is a greater percentage of chips going to need to use the hardware?

Anirudh Devgan
CEO, Cadence

Yes. So, okay, there are a lot of good questions there. Of course, we are a system company in that way, because that thing is not just a chip; it's the whole system, it's a rack. And for the really big configurations, they connect, like, 8 racks or 16 racks together, all connected on InfiniBand, okay, which is super, super high-end, liquid-cooled. I mean, if you visit our lab, you will see it, and then all the IP has to work, and there's a lot of optics also, all kinds of stuff. Now, those things have a certain capacity, you know, billions of gates, okay?

So what's happening is, when we go from, like, 3 nanometer to 2 nanometer to 1.4 to 1, okay, this is going to continue for the next 10 years at least, right, all this migration. So whenever you go from 5 to 3 or 3 to 2, the chips may or may not be getting faster. See, this is the whole debate of whether Moore's Law is dead, okay? Moore's Law is dead in the sense that it's not classical Dennard scaling, where chips get faster. But one thing is happening: they are getting more area-efficient, so you can pack more things into the chip. That's why we went from 1 CPU to 8 CPUs to GPUs to neural engines, right? So the size of the chip increases significantly when you go from 5 to 3.

And then the amount of verification you run goes up exponentially, because if the size doubles, the number of combinations goes up much faster than that. That's why there is more and more hardware verification needed. You need more boxes, you need more use cases. That's a good place to be, yeah.
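One way to see the "combinations" point: if a design holds n bits of state, it has 2^n possible states, so doubling the design size squares the state space rather than doubling it. This is a deliberate simplification (real verification works with coverage goals, not exhaustive state enumeration), but the scaling it shows is why effort grows so steeply:

```python
# State-space view of why verification effort explodes with design size.
# Simplified model: n bits of state give 2**n possible states.

def state_space(n_bits: int) -> int:
    return 2 ** n_bits

small, doubled = 10, 20
print(state_space(small))    # 1024
print(state_space(doubled))  # 1048576

# Doubling the design size squares the state space; it doesn't double it.
assert state_space(doubled) == state_space(small) ** 2
```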

Steve Byrd
Global Head of Thematic and Sustainability Research, Morgan Stanley

It's a great place to stop. We should carry this on offline, but we're eating into the next session's time.
