Cadence Design Systems, Inc. (CDNS)

Morgan Stanley Technology, Media & Telecom Conference

Mar 5, 2025

Moderator

All right, so good afternoon, everyone. We've got Cadence up on stage next with Anirudh, the CEO, here to join us. Thanks for coming along, Anirudh. Just before we do so, I've got a quick disclosure to read out. For important disclosures, please see the Morgan Stanley Research Disclosure website at www.morganstanley.com/researchdisclosures. If you have any questions, please reach out to your Morgan Stanley sales rep. So with that put to one side, Anirudh, thanks again for coming along. Really tumultuous time. We've seen quite a lot of news flow in AI. Obviously, we had NVIDIA on the stage earlier. But maybe take us through, first of all, what impact do you think DeepSeek has had on this sector? And then more broadly, what are your views on AI right now, and how is it impacting EDA as you currently see it?

Anirudh Devgan
CEO, Cadence

Absolutely. Thank you for starting with a simple question. A question nobody's interested in now. This is a great point. OK, it's a big thing. One thing is EDA, I've said this before, what we do is computational software. So it's CS plus math, and it has evolved over 30 years, without getting into the details. Even my own background is in numerical analysis. And AI will go through a similar evolution, maybe much faster, but a similar evolution. Just to give you an example, when EDA started in the 1970s, it was doing dense algebra, which is the most inefficient way to do things. Then it became sparse, then partitioning, then latency. There are well-known numerical methods to optimize things, which AI will go through.

So when AI started, and of course, the last few years, it's such a gold rush that things were done, but they were done a little bit brute force in terms of computation. It's like a dense multiply. And EDA used to do dense inverse and dense multiply without getting too technical. I mean, AI is mostly dense multiply. So by nature, the software people always innovate. I know silicon is important and hardware is important, but I am, even though we sell to semiconductors and hardware companies, we are software. So software folks always innovate at a high speed. So I believe that there will be multiple DeepSeek moments. And this is what some of our, and I asked our big customers, this is what they believe. So what DeepSeek, now, some things people argue, well, did they really use this many or that many? OK, I don't know that.

Leave that aside for a moment. Nobody knows. But now they have fancier terms, or more layman terms, like mixture of experts. But mixture of experts in classical computer science would just be partitioning. So what they do is partition the system into 10. And if only one is active, you evaluate one. You don't evaluate all 10. This is like a 50-year-old method. And there will be other such methods, like latency, like partitioning. So I believe that this is a natural good progression, and there will be multiple of them, because the software guys always innovate. Now the question then you ask is, OK, if that's going to happen, what does it do to compute? So again, I asked our big customers the same question, and I'll tell you what they say. What happens is compute, of course, naturally splits between training and inference.
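The partitioning idea is simple enough to sketch in code. Below is a toy illustration in Python; the gate, the expert functions, and the scaling are all hypothetical, not anything from DeepSeek's actual implementation. The point is only the cost structure: a router picks one of ten experts, and only that one is evaluated.

```python
# Toy sketch of mixture-of-experts as classical partitioning (all names and
# numbers here are hypothetical): a gate routes each input to one expert,
# so only 1 of the 10 experts is evaluated per call.

def make_expert(scale):
    """Each 'expert' is just a function specialized to part of the input space."""
    return lambda x: scale * x

experts = [make_expert(s) for s in range(1, 11)]  # partition the system into 10

def gate(x):
    """Trivial router by value range; real MoE gates are learned networks."""
    return min(int(abs(x)) % 10, 9)

def moe_eval(x):
    # Dense evaluation would run all 10 experts; partitioned evaluation runs 1.
    return experts[gate(x)](x)

print(moe_eval(3.0))  # routed to expert index 3 (scale 4), so 12.0
```

Dense evaluation of all ten experts would cost ten times as much per input; the gate is what turns the old partitioning trick into a compute saving.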

On the training side, what I see happening is that training cost hopefully gets better or efficient, but then also requires more domain-specific models. Because if the training is more efficient, one can do more models. Otherwise, if the training cost was too high, you could not do domain-specific. And then you could do a lot more on the inferencing side. So then I talked to some big customers. They still project that their compute will go up for the next several years. And this is not dissimilar to what happened in CPUs or something like that, or what happens in personal computer, or what happened in mobile. So right now, my expectation is, and most people's expectation is, compute will still go up. Now, from a Cadence standpoint, because we don't sell compute, we use compute. So we sell things to design things.

And I think that will go more and more, because we're not directly tied to volume of silicon consumption. We're tied to design activity. So as the models mature, I think there is more need for inference, and there is more need for domain-specific compute, which requires more design activity. So that's on the DeepSeek side. And I also believe this proves that smart people can do things everywhere, and it's not a good idea to restrict things too much. That backfires. Also, AI by nature, if it's matrix multiply, is embarrassingly parallel. This is nothing earth-shattering I'm saying. Because it's so embarrassingly parallel, you can make things more efficient, but you can also use more of them. So anyway, I think all the world's countries will do AI, and it is difficult to stop it in a particular way.

Now, on AI in general and its impact on EDA and other things: I always believe the real value of AI, or any new technology, is not horizontal. It's always vertical. I've said this for years, because horizontal technologies, especially in software, will become more and more prevalent. Like now, there are so many models, and there will be more. And now the question is, will there be a super intelligent model that is dominant over all the other models? OK, that's extremely unlikely. That doesn't happen in software. There are so many smart people. So there will be a lot of models which will be roughly the same, like having a V6 engine. One V6 engine is slightly better than the other. But the real value is how good is your car? So the value will move to applications.

So the first phase of applications is, of course, agentic AI and all this. And we have applied AI to our own products for a long time. And the real value for EDA, and this will be true in other industries, is workflow automation. I can talk more about that: not just one run, but how to automate the entire workflow. We already have progress there; we can talk about it. That's on the software side, and that will happen to all software. And that is going to be exciting. But what can be even more exciting than software monetization? And this is a big topic, right? How will AI monetize in software? I've said for some time that there will be three phases of AI, in my opinion. Again, this is conjecture. We'll see what happens.

But I always believe there are three phases of AI. The first phase is the infrastructure phase and this agentic phase, which is hardware infrastructure and software application. Let's say this first horizon is one to three years, which we are in. But horizon two, which is, let's say, three to seven years, and some people think sooner than that, I have always believed is physical AI, which is cars, drones, and robots. And then horizon three is AI for science, like real science: physical science, material science, or most importantly, life sciences, biology. And I think horizon two and horizon three have the potential of being even bigger, in trillions of dollars, not hundreds of billions. So I want to make sure, as Cadence, we are also plugged into the physical AI part.

The chip design is different. The models are different. The simulation is different. So I am excited about that. And we're already seeing signs of physical AI design happening right now, even though a lot of talk is on the data center stuff. But I think the real monetization could happen there. So I'll stop there.

Moderator

Oh, no. I mean, it's always great to get your impressions on this, and the idea of applications and verticalization comes through, and those three phases. Maybe if I stay with that just for a little bit: as we transition via agentic AI into this physical AI space, how does that transform your business? What are we going to see when cars become robots and then we have humanoids? How does that impact EDA and Cadence?

Anirudh Devgan
CEO, Cadence

Yeah, absolutely. So I've talked about this for a while; again, this can take some time. We talk about this three-layer cake. And people say, why are you talking about cake? Is there a Cadence Bakery or something? I'll tell you why I talk about cake. The cake is that there is the AI on top. So what is AI? AI is a fancy word for data-driven science. You give me the input and output, I'll tell you what the function is. That's all AI is. My dad was a mathematician; they would call that curve fitting. Now the fancier term is autoregressive, but it means you automatically fit it. So you give it x and y, it will tell you f. So that's the top level. The middle level of the cake is you tell me f. I know f.

You give it any input, I will tell you the output. So how do I know f? Because I know it's made up of transistors or molecules or physics. So this is the fundamentals, ground truth, like NVIDIA calls it ground truth. And the third, bottom layer is computing, accelerated computing. It used to be x86. Now it's ARM, CPU, GPU, FPGA. So these three layers are going to be there, and the first two layers merge. What are classical methods versus what are data-driven methods will merge. One issue is that people who graduated recently say, well, I don't need the middle layer. I just fit everything, and I'll tell you the answer. And the people who graduated 30 years ago say, I don't need the top layer, man. Just tell me f, and I will tell you the output. But what I believe is you need both.
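That "give it x and y, it tells you f" framing can be shown with a toy curve fit. This sketch is purely illustrative (the hidden line and the sample points are made up, nothing Cadence-specific): we sample (x, y) pairs from a function we pretend not to know, then recover it by least squares.

```python
# Toy illustration of AI as "data-driven curve fitting": sample (x, y)
# pairs from a hidden f, then recover f with a closed-form least-squares fit.

xs = [i / 10 for i in range(-50, 51)]
ys = [3 * x + 1 for x in xs]          # the hidden f(x) = 3x + 1

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Classical least-squares line fit: slope, then intercept.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope, 6), round(intercept, 6))  # recovers 3.0 and 1.0
```

The middle layer of the cake is what generates trustworthy (x, y) data in the first place; the fit on top is only as good as that ground truth.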

You need the AI, and you need the fundamentals, and you need the bottom layer. Now, with this three-layer cake, you say, why do you call it a cake? The reason I call it a cake is that most people I know don't eat cake layer by layer. I could call it a stack, but when you eat a cake, you eat all three layers. What I mean is, and I believe anyway, the applications will be vertical. The value is all vertical, not horizontal. So when you have a vertical application, like data center or physical AI, you will have all three parts of it: the AI part, the fundamental part, and the compute part. That's definitely true for data center AI. We talked about DeepSeek. It's changing that.

And you still need the AI part. You need the algorithms, and you need the GPUs. But that's true for the physical part, too. And you see that already. So first of all, the chip is completely different. And you see that with Tesla. You see that with NVIDIA. You see that with all the car companies. The chip, first of all, is power limited. It's not as much power. It's not one kilowatt or something. Also, CPU and GPUs are on the same chip. Look at all the auto chips. They are very different. So they have to be redesigned. The AI part is different. It's a world model rather than a language model. There's all this talk on that. And then, of course, the robotics, all these systems, they're not just AI.

If you look at Waymo, it also uses control theory, which is very specific to robotics and self-driving. Not everything in these self-driving systems is AI. Of course, there's a lot of AI, but you still need the fundamental control. And then you need to do digital twins and simulation of robotics and cars, which is different. So there are these three phases and three layers of the cake. That's a three-by-three matrix. I want to make sure we are relevant in all three phases and all three layers. For Cadence, it is very relevant, because in physical AI, the chip design is different. What Honda or Toyota or Tesla or Rivian is designing, the chips are different. The simulation of digital twins is different.

The AI models are different. And the same thing: we invested two or three years ago in life sciences, because that will be different. And the way we invested is in the middle layer also. A lot of times people forget the importance of the middle layer. Even if you have self-driving, if the car is not good, nobody's going to buy it. And same thing: if your basic algorithms are not good, AI is not enough by itself. So even in life sciences, there are a lot of life sciences efforts. But first, you have to make sure the modeling of molecules is correct before you apply AI and do drug discovery. And there are only a few, only three, companies that do accurate modeling of molecules.

And we bought one of those companies, because we know how to add AI. And those things run on GPU very effectively. But you need the middle layer to be accurate. That's what gives the accuracy and ground truth, with AI on top. So yes, of course, the first phase is critical. Infrastructure will be huge. But we also want to invest for the second phase and the third phase. But proportionately.

Moderator

Yes. So a lot of things coming down the pipe here, including control theory, obviously different layers, just to allow for at least the emulation of some of the things coming.

Anirudh Devgan
CEO, Cadence

And design, not just emulation. I mean, we have all these companies we're working with to design these chips for physical AI. Even though automotive is slightly weak right now, the design activity is there, because a lot of these things will come a few years later.

Moderator

Makes sense. All right, let's maybe bring this back to where we are today. So if you look at, you've had a great fiscal year 2024, moved into 2025. And can you just remind us again how the year finished, the makeup of the backlog, and what we're looking to see as we look out through 2025?

Anirudh Devgan
CEO, Cadence

Yeah, we had a great year. We ended 2024 with 13.5% revenue growth and 42.5% operating margin. So that's more than 55 on the Rule of 40. And we have some new products, including hardware products, and in general, we had a record booking year and quarter. So I feel very good about that. And our guide, I thought, was very good. We're always a little prudent, but this is the best guide we have given. It's almost 43.75% margin and 11.5% to 12% revenue growth. So I think that's the strongest initial guide we have given, which is again more than 55 on the Rule of 40. Now, we were a little prudent on China, which proved to be true.

I mean, there is so much uncertainty. But overall, I think the business is in good shape. And we're always building for the long term. There's all these design trends and all that. There is some uncertainty. And anyway, in the beginning, we are always prudent when we guide.

Moderator

And that was borne out last year as well. And maybe if we stay with China, what would be the puts and takes on that guide that you gave for China?

Anirudh Devgan
CEO, Cadence

No, we assume that China is flat, though, of course, it has grown through the years. Now, last year was more difficult. It was down last year. So this year, we assume it's flat. It hasn't been down two years in a row in the last 25 years. But we didn't assume that it will grow like the Cadence average or something. So ex-China, our growth is even better. And the reason to assume it's flat is all this uncertainty. On one side, there's a lot of design activity. Like we're talking about physical AI. By the way, China is in a fabulous position in physical AI, in my opinion. There are more than 100 robot companies in China. There are at least five, six major car companies that are actually in a very, very good position.

And that's also good for our system business and EDA business. But on the other hand, there are some regulatory concerns. Will there be more restrictions on AI design and all that? Or like HBM, there's all talk of there may be some restrictions there. So then you're just prudent to assume that it is flat.

Moderator

So just good old prudence at this point, basically. That makes sense. And if you take that out, the core EDA business, non-China, does look as though it's suggesting maybe high single digit, low double digit sort of growth. So what are the elements driving that this year? Are we seeing slowing activity in the non-AI space, for instance, as a headwind?

Anirudh Devgan
CEO, Cadence

First of all, this is the initial guide. A lot of this is inferred from the guide, and we'll see what happens, because we would rather be prudent and positively surprised than the other way around, for obvious reasons. But in terms of our environment, I think design activity is strong. What you know has been there for the last two years is that our customers' revenue in 2023 was down, for some of our customers, like semi customers. And in 2024, some of them were up, but a lot of them were down. So there have been two tough years for the semiconductor industry, and that affects some of our growth: 2023 affects 2024 and so on. But going forward, 2025 seems to be booming.

I mean, there are a lot of experts here in the room who know the semi market and the system market. Parts of it are, of course, fabulous, like AI and data centers. But for the rest of the companies, the recovery looked like it was going to be first half 2024, then second half 2024, and now it looks like 2025. To the extent that recovery happens and the customers invest more and more, they always invest in R&D. But of course, it's a better environment to invest more in R&D if revenue is growing.

Moderator

It's interesting that you talked about some of the underlying demand still in China, and you could see in your bookings a huge bookings quarter in Q4, backlog at record levels as well, but upfront revenue only growing 7% or 8% this year. So all this suggests that it's yet to come, 2026 perhaps. So can you give us a sense of how 2026 might grow after this bookings build in 2025?

Anirudh Devgan
CEO, Cadence

No, we are very careful talking about future years. I think we are confident about 2025, and I think it's a very good initial guide, with very good, best-in-class margin. So we'll see; I mean, we'll keep you updated through the year. We never give multi-year guidance. It's very difficult to predict. Yeah.

Moderator

That's fair.

Anirudh Devgan
CEO, Cadence

But thank you for the question.

Moderator

Anytime. So I guess if we try and look at particular areas where you see excitement emerging, we've talked about physical AI already. But one that often gets perhaps overlooked is edge AI. Are we seeing a lot of activity there? And how do you think that'll come through in numbers?

Anirudh Devgan
CEO, Cadence

Oh, absolutely. I mean, some people could call physical AI edge AI. The reason I differentiate it is that part of edge AI is also infrastructure to me, like laptops or phones. I don't think of it as physical AI, but it could be edge AI. Like laptops are going to get completely transformed. I mean, you know that anyway. And I think we will see a lot more exciting announcements this year. And phones, of course, will get transformed. So is that edge or not? And then cars, robots, and drones. So some of it I put in infrastructure. Some I put in physical. The reason I put it in physical is because, see, there is one school of thought that AI will become super intelligent. I'm all for that. Why not? If that happens, it's good for our business, good for that.

But it is not super intelligent yet. I hope it will be. And nobody knows, by the way. If somebody predicts that, God bless them. But what AI is definitely good at is perception. It is good at seeing and talking. It's remarkable. It can understand. It can perceive perfectly. So what I believe is that for physical AI to be successful, even the current AI, or slightly better than current AI, can make that happen. That's why I'm optimistic about physical. But there will be other edge AIs, like laptops and phones and MCUs and all the semiconductors. They will all, of course, go through AI injection.

Moderator

Of course. Maybe just come back to products. I mean, Z3 and X3 really doing very well. Clearly, a space for these, given the amount of parameters that they can emulate, for instance, as well. So walk us through the interest there and how you see hardware supporting growth in 2025.

Anirudh Devgan
CEO, Cadence

Yeah, so folks, you probably already know, but just to level set: we are mostly a software company, but we sell some hardware, which is super critical for the design of all these systems. And in the last few years, it's become even more critical. None of these advanced systems can be designed without the kind of products we sell, which is Cadence Palladium and Protium. What they do, basically, is that even before you have silicon, during the design process, they can emulate the chip and present a view to the software person as if the thing exists. So you can overlap hardware and software development: you're writing software before there's any physical chip. And it runs like 1,000 times faster than if you were just simulating on CPUs. It's still slower than real life, because real life is much faster.

It does make it possible to emulate. And what happened is that there is no other way to design these things now, because they're so complicated. If you wait till the end to write software, first of all, you don't know if the chip is first-time right. You spend all the money, send it to TSMC or Samsung or Intel, and you don't know whether it will be right. So emulation makes sure the chip is right, and second, it makes sure you can write software. And the way we do it is we build a custom chip. Going back to that three-layer cake, we have a domain-specific chip. It's like a Boolean supercomputer. This thing is quite interesting if you want to see a supercomputer being built.

Each rack has, we don't disclose exactly how many, but much more than 100 chips. And these are full-reticle chips: liquid-cooled, optical, InfiniBand, all the latest. And then you can connect 16 of them together, which is Palladium Z3, our latest system. As a result, you can emulate, basically mimic, a one-trillion-transistor chip. And Blackwell, and NVIDIA is, of course, a development partner on Palladium, is 200 billion transistors. That was designed on eight racks of Z2, the previous generation. And now we can do 16 racks of Z3. So it went from 200 billion to one trillion. That should support the industry for several years to come. And we are the only company that designs this kind of supercomputer by ourselves. And we have great development partners.

We released that in the middle of last year, the middle of 2024, and that also caused a little bit of a shift of revenue to the second half last year. We released it about three years after our previous version. Normally, these things are pretty complex; they take longer. But I think three to four years is a good kind of cadence for that product. That launch was very successful. Of course, our bookings were good across the board, but definitely in the hardware segment to drive all this. And it's become almost like a de facto standard in most advanced AI chip design.

Moderator

Makes sense. I'm always thoroughly impressed that a software company developed and ran their own ASIC project. So congratulations to you guys. I'll open it to the floor in a second. Just one last question I wanted to ask. We hear you talk about digital biology and life sciences more broadly. And I think you did mention OpenEye as a project that you were involved in last quarter. So it does seem like it's coming into view. Is that the right way to look at this? Or what is the timeline for this to hit sales?

Anirudh Devgan
CEO, Cadence

Yeah. No, it's doing well. It's growing, but it's still small. Now, the trick there is, if you believe in the three horizons like I do, the question is, when do you invest in the third horizon? You don't want to be too early, but if you're too late, you can miss it too. So to me, it's a proportionate investment. 90% of R&D investment is still in the main stuff; just 1% or 2% goes here, so that we know what is happening, because you don't know till you actually do it. So it's a good acquisition and good investment, but it's proportionately small. It's like an option you're getting for free with Cadence. And then we will watch the market. And of course, we can do a lot of AI.

You will see a lot of announcements from us in AI. And then if there are more opportunities to expand along the way, we will do that if it makes sense. To me, it's about practicing: without doing it, you don't know what is happening. But we practice. And I believe fundamentally it can be big, and then physical AI, then the infrastructure AI. So that's how I look at it.

Moderator

Yeah, very exciting. I said I'd open up the floor. So I'll do just that. Any burning questions out there? Got one at the front here, and another one over there.

Audience Member

If Intel makes significant progress on 18A, would that be positive for your business?

Anirudh Devgan
CEO, Cadence

Yes, I think so. Because we talked about last year, we're glad to partner with Intel Foundry. And we ported our software and IPs to them. But right now, all the business we are assuming is from Intel, not from Intel's customers using 18A. So as Intel Foundry gets more and more customers, I think that's incremental to Cadence.

Audience Member

Is there a way that you could use AI to, I guess, accelerate a move from one foundry to another with an existing product set? Or rather, help your customers do that?

Anirudh Devgan
CEO, Cadence

We are always looking at ways to accelerate product development. But we have to do it in a proper way, whatever is allowed. But we have all kinds of efforts, both in digital and analog, to automate this kind of design process. Yeah.

Moderator

Two good questions. That's killed my Intel question, which was coming next. But I think coming off your second part there, it does look as though agentic AI is happening in EDA. And that does look as though it's an accelerant for product development. How does the EDA community grasp that and really use that as a new initiative?

Anirudh Devgan
CEO, Cadence

Oh, yeah. I mean, that's a great topic. There are so many ways. OK, let me back up in a more fundamental way. In EDA, going back to the three-layer cake, we spend a lot of time in the middle layer, and we have probably the most complicated algorithms. Now, everybody thinks what they do is complicated, but I can say, talking to our customers, they say that EDA is some of the most complicated software they buy. These are big customers. We have spent 10, 20, 30 years honing these: Boolean logic, matrix methods, all the algorithms you can imagine in the world, and they are very difficult problems to solve. We had all that in EDA.

But one thing we didn't have, which is very important also: all of these are what I would call single-run environments. You give it an input, it gives you an output. It does a lot of complicated stuff to give you that output, and it runs in like one or two days. But the design process is not done in two days; the design process takes like six months, one year, or 18 months. So what happens is the customer runs this complicated tool, but then they run it again. They change something, run it again. They do exploration of the design space. The design space changes. So EDA, design automation, never automated that, which is, let's call it, workflow automation. It's not that we didn't want to do it.

We wanted to do it for 30 years, 20 years, 10 years. But there was no mathematical, no algorithmic way to do it. The only algorithmic way in the past is what we would call design of experiments. For example, we worked with a customer that had 17 variables they were changing, running our tool set. And they were doing it manually, of course. They run it, change something, run it again. It takes six months. If you do it by design of experiments, which was the previous state of the art, this is basic statistics, it would take like four million runs, which is completely impractical. So the reason we never did it is because there was no practical way to do it. But now with AI, you can create a model for that design.

You can do a much more intelligent search of that 17-dimensional space. We can transfer knowledge from one run to the next run and automate the design flow more completely. We never say we can fully automate the design. So what happened in that particular example: instead of 4 million design-of-experiments runs, using AI and reinforcement learning and a lot of other methods, we can do it in 200 runs. And some of the runs are parallel, so you can use more compute; this is what happens in AI. So while one run takes one or two days, in one or two weeks you can complete the design. Now, the customer will still iterate; they're not done with one iteration. But instead of six months, it can probably be done in two to three months.
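The scale of that reduction is easy to see in a sketch. The objective function below is a made-up stand-in for one tool run (a real flow would invoke the EDA tools), and the search is a crude perturb-and-keep loop rather than Cadence's actual reinforcement-learning method; the point is only the run count, a couple hundred evaluations against a full factorial that explodes combinatorially in 17 dimensions.

```python
# Sketch: budgeted search of a 17-knob design space vs. full-factorial DOE.
# objective() is a hypothetical stand-in for one EDA tool run (lower = better).
import random

N_VARS = 17
FULL_FACTORIAL = 2 ** N_VARS        # even 2 levels per knob is 131,072 runs;
                                    # a few levels each reaches the millions

def objective(cfg):
    """Pretend PPA cost with a sweet spot at 0.7 for every knob."""
    return sum((v - 0.7) ** 2 for v in cfg)

random.seed(0)
budget = 200                        # the "200 runs" from the example
cfg = [random.random() for _ in range(N_VARS)]
start_cost = objective(cfg)
for _ in range(budget):
    # Perturb one knob, keep improvements: a crude stand-in for the
    # model-guided search described above.
    i = random.randrange(N_VARS)
    trial = list(cfg)
    trial[i] += random.uniform(-0.2, 0.2)
    if objective(trial) < objective(cfg):
        cfg = trial

print(f"{budget} runs (vs {FULL_FACTORIAL:,} full factorial): "
      f"cost {start_cost:.2f} -> {objective(cfg):.2f}")
```

A real workflow optimizer would also fit a surrogate model to past runs and launch perturbations in parallel, which is where the one-to-two-week wall-clock figure comes from.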

But what is more powerful with AI is this workflow automation. I think this will need to happen in all AI tools to be successful. And it's not just that it's faster. Faster is one thing, but it can also be better. It's not that the previous design was wrong. But because AI is mathematically searching a 17-dimensional space, a lot of times the result is better: like 10% better in power, or 5%, depending on how good the initial point was. That's truly remarkable, because some of our customers will spend like six months optimizing for 2%, 3% power, because they sell hundreds of millions of these things. So it's better with AI because of workflow automation. And I'm giving an example of implementation, like digital implementation. But there's verification. There's analog design. There's process migration.

So all of these, there's a new algorithmic set, which is coming from this data-driven AI approach, the top layer, that can be infused. But you have to do it vertically, keeping the middle layer in mind. So I'm super excited about that. And the whole goal always is to do some things better, either the migration better or the PPA is better or the verification coverage is higher.

Moderator

I think you've given me the title for my next note, "Searching 17-Dimensional Space." I think there was a question from someone over here. Thanks.

Audience Member

Thank you. Just a competitive landscape question. If your key competitor is successful in acquiring Ansys and is able to combine a leading EDA with leading multiphysics solution, how does that impact your competitive position, particularly in some of these more kind of vertically integrated use cases you've been talking about, such as physical AI?

Anirudh Devgan
CEO, Cadence

Yeah, that's a good question. I've said this before: we were competing pretty well with them separately, and I don't know if that much changes if they're together. So we feel pretty good about our competitive position and product portfolio. In EDA, we have the broadest product portfolio: analog, digital, verification, packaging. And in SD&A, over the last few years, we did BETA CAE, and we have done five, six acquisitions. So I feel the portfolio is broad enough. It's not a portfolio breadth issue. We just make sure we work with customers and deliver solutions.

Moderator

Great. Looks like we've hit the time. Sorry if we didn't answer all questions. But, Anirudh, thank you very much.

Anirudh Devgan
CEO, Cadence

Yeah, thank you.
