Thank you. Okay. Good afternoon, everyone.
Yeah.
Welcome to San Francisco. We're on the stage with Anirudh Devgan, CEO of Cadence. Welcome, Anirudh.
Thank you.
Maybe if I just kick things off, I mean, no pressure, but we were in here, all of us earlier. Jensen gave a call out to Cadence, talking, I think, really around the sort of emulation space. Of course, you talk a lot about that sim-to-real gap.
Mm-hmm.
How you guys can plug that. Maybe help us understand how does that segue, and how is this an opportunity for you guys, looking at Physical AI in particular?
Yes. Yes. Well, thank you. It's good to be here. Thank you for the interest, and we love working with Jensen and NVIDIA too. We have a long-term partnership with them, of course. I think what I have talked about forever, so sorry for people who are familiar with this, is the three-layer cake, you know. There's all this worry, you know, that AI is going to replace software or something like that. The thing is that there are different kinds of software, right? There's a whole range of software. For us, you know, the reason I call it a cake, and people say, "Why do you call it a cake?" It's like a Cadence bakery or something. I'm not a good cook, you know, by the way.
I'm a horrible baker, no, definitely, not a great... The thing is, unless you are like two years old, you know, normally when you eat a cake or you consume a cake, you consume all three layers of the cake together.
Yeah.
At least that's what I do. You bake it together. What the three layers are is AI at the top, and I can get into more detail, which is more like data science algorithms, you know, AI at the top. The middle layer is more ground truth, you know, physics and, you know, the good old stuff of how things actually work, like molecules and transistors.
Sure.
Then the bottom layer is, compute and data. You know, now it's accelerated compute and data.
Yeah.
NVIDIA and others. Then people who graduated in the last few years, they say, "Well, I just need AI, you know, give me input and output. I will create a model, and it will do everything." People who graduated 30 years ago say, "Well, what you know is the real truth, you know, how transistors actually work and all that." The reality is that you don't need to take sides on that.
Yeah.
Any side. It's both together, running on top of data and compute. By the way, this is gonna happen in all markets. All markets. The slice of the cake is, of course, domain-dependent.
Mm-hmm.
It could be chip design, it could be self-driving cars, it could be robots, right?
Yeah.
First thing to remember is, in our case, the middle layer is very scientific, numerical, physical. You know, if you're designing like 100 billion transistors at 2 nm, it better be accurate and, you know, you really do need to know the fundamentals. Then when AI runs on top, it uses more of the middle layer, which is what I think-
Yeah.
Jensen is talking about also.
Yeah.
When you do more physical AI or agentic AI, you know, so there are at least two. Then there are, and I've also talked about this for years, the three main slices of the cake.
Okay.
The first slice is what is happening now.
Mm-hmm.
which is driven by the data center, deployed in software. That would be a lot of, even for us, like chip design flows or other flows.
Yeah.
I always believe that, and for years now, that the second slice will be huge, which is Physical AI, which is cars, robots, drones.
Yeah.
The third slice of the cake would be sciences AI-
Mm-hmm.
-which is of course life sciences, material sciences.
Yeah.
In all cases, you know, the three layers are different. In the case of the current one, of course, we have, you know, LLM-based agentic AI at the top level. You know, our basic kinda tools at the middle level.
Mm-hmm.
GPUs at the bottom level, right?
Yeah.
That we can talk more about, you know, we have all these new products for AI. The second part, which you asked me about, is more specific to Physical AI. Because Physical AI will be huge, right? I mean, we can talk more about cars, drones, and robots. We are also building a flow for that.
Wow.
There is this sim-to-real gap.
Yeah.
There's more opportunity for simulation.
Okay.
Yeah.
I mean, it sounds like a huge cake because that's three industrial revolutions, one after the other.
Yeah. It's three by three.
Yeah. Yeah. It's phenomenal. First of all, I should have read out a disclaimer, so I will apologize.
Yeah.
Let's imagine we've done this from the top. Today's discussion will contain forward-looking statements, including Cadence's outlook on future businesses and operating results. Due to risks and uncertainties, actual results may differ materially from those projected or implied in today's discussion. I apologize, that's on me. If we think about, however, the wider ecosystem in relation to what you've talked about, silicon physics and then AI as the three layers, you know, what is your position, particularly around that physics layer that we just mentioned?
Oh, yeah. I mean, first of all, again, the physics, I mean, the ground truth, is different in different slices of the cake.
Mm-hmm.
If it is chip design, of course, we have the strongest position. Cadence, you have to remember, has the biggest portfolio in core EDA, core chip design.
Mm-hmm.
You know, digital, analog, verification, packaging.
Yeah.
You are seeing that in the market, right? You know, comparatively, we are doing great in our core business. Now on the second slice of the cake, you know, Physical AI, we had to do some M&A. The one we did recently was Hexagon.
Yeah.
The reason I did that was, in the second slice of the cake, like I was saying, the Physical AI model, the AI model, is different. The AI model is no longer an LLM model, it's a world model.
Mm-hmm.
Like W-O-R-D and W-O-R-L-D.
Yeah.
My wife says my Ls are difficult to... It's a world model in the second case. In the world model, you know, there's not enough data on the internet. You know, the LLM model you can train with data on the internet, but for the world model you need to generate data, synthetic data. Either you capture the data with sensors, but that's too difficult, right? It takes too long to do it. Or you simulate it, but if you simulate it, you have to make sure that it is very accurate. That's called the sim-to-real gap. In that case, you know, Hexagon had the most accurate robotic simulator with Adams. We're gonna put Adams in that loop to improve the accuracy of simulation for Physical AI.
Gotcha.
The third part, which is the silicon-
Mm-hmm
... is gonna be different because the silicon is more mixed-signal and more low power. You know, the silicon for Physical AI is different than the silicon for data center AI.
Yeah.
Like the silicon used in cars and robots. That is actually also in Cadence's core strength because it's more mixed-signal.
Mm-hmm
We have historically worked with all the big semi companies that make auto chips, right? Or these kinds of embedded, mixed-signal chips. Now also with OEM players like Tesla and Rivian or BYD that are designing their own chips.
Cool.
With Physical AI, the critical thing in these slices is that all three layers innovate together, and we wanna make sure we are well positioned for that.
Gotcha. Okay. Very clear.
Mm-hmm.
Maybe take us back to some of the discussions we've heard in-
Mm-hmm
... and around this whole conference, the sort of worry that AI volatility is disrupting traditional software business models. Maybe if you could just help us understand how you would stand apart from that disruption, and where indeed you would be moving to change your business model or augment it against this volatility.
Yeah. I think the one thing to remember is that I think for us it's not disruption, it is amplification.
Yeah.
You know, the question is how do we monetize that amplification? What AI will do is that it will naturally drive more usage of the middle layer.
Yeah.
The top layer drives more usage of the middle layer.
Mm-hmm.
There are a few things that are different for chip design, because what I think people get worried about is, if something is 10x more efficient, does it reduce the usage?
Mm-hmm.
Okay. In EDA, you know, going back 20, 30 years, we are 100 times more efficient. It just, you know-
Yeah
... when I was at IBM in the late nineties, we would have 500 people design a CPU in 5 years. It's a real thing, by the way. Okay. Intel, same thing, or DEC Alpha, you know. Now you can have 50 people, sometimes even less, design a CPU in 6 months. It's a 100 times-
Yeah
... more efficient. We have even more usage of our tools.
Yeah.
The reason for that is that the workload is exponential, because our customers are designing bigger and bigger chips. That argument only applies if the workload is constant. I don't know, like you are doing something, I don't wanna pick on anybody, but, like, if your workload is not growing, workload is linear to the number of people, for example, you know, then if you are 10x more efficient, you may use 10x less. In our case, you know, we are doing 3 nm chips now. It'll be 2 nm, then 1.4 nm, then 1 nm, then there will be 3D IC. There's a wide projection. In five years, the chip size will be five to 10 times bigger. Complexity will be 20, 30 times bigger.
Yeah.
You need that 10x to keep up because our customers don't want to hire 30 times more engineers. AI will modulate the headcount growth for sure, but instead of 30x, it'll be 2-3x.
Yeah.
The remaining will be with automation and AI. This is the history of chip design industry because of Moore's Law. This is a very different thing.
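The person-year arithmetic behind that "100 times more efficient" claim can be sketched quickly; the headcounts and timelines below are the speaker's illustrative round numbers, not company data:

```python
# Rough sketch of the "100x more efficient" claim, using the
# speaker's illustrative figures for CPU design effort.
old_effort_py = 500 * 5       # late-90s: 500 engineers for 5 years = 2,500 person-years
new_effort_py = 50 * 0.5      # today: 50 engineers for 6 months = 25 person-years

efficiency_gain = old_effort_py / new_effort_py
print(efficiency_gain)        # 100.0
```

The point of the surrounding argument is that this 100x gain was absorbed by an exponentially growing workload, so tool usage still grew.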
Yeah.
One part that is different for us is that in the cake, the middle layer is critical.
Yeah.
You know, you have more scientific software. The second part that is different is that the workload is exponential.
Yeah.
As a result, like, you know, we have customers that will spend months, you know, optimizing things to get a few percent better power, because they're gonna have, like, millions of these devices. As things get more efficient with AI, they run more things.
Mm-hmm.
You know, like NVIDIA, they will run more optimization to improve the GPU further.
Yeah.
More optimization to improve the mobile CPU, or more optimization for the car.
Mm-hmm.
If you look at the license count, I think that is going up nicely. We just have to make sure that we get our value for that. The way to do that is to demonstrate the value to our customers.
Gotcha. Talking about more optimization and some of the things that are different, you know, a few weeks ago, you did launch the ChipStack.
Yeah
Super Agent.
Mm-hmm.
Maybe just help us understand how did that differ from the GenAI tools that you had out?
Mm-hmm
In recent years, and how could that accelerate the growth for you, in the next couple of years?
Yeah, it's a great question. I'm super excited about ChipStack. You know, this is a new product category.
Yeah.
Okay. If you look at LLM or agentic AI, what is the biggest use case right now? The biggest use case in the general market is coding, right?
Yeah.
You know, C, C++, Java, you can just talk to it and it writes code, which is great. And we use it internally for our coding. You know, we are a software company, so we can use that to become more efficient in R&D. One issue is, if like 80% of the code you write is good and 20% is not good, then you spend a lot of time figuring out which 20% is not good. This is one issue with LLMs. If you go to chip design, it's actually the opposite. If you look at our history over 30 years, you know, chip design also has a language. You know, I don't know, for those of you who did engineering in undergrad or...
You know, there is RTL, register transfer level, or SystemVerilog, which is the language that, you know, all our customers will define the chip with.
Yeah.
Okay. So far, they manually write that language. They not only write the design manually, they also write a verification plan manually. We have all kinds of tools to verify that it is correct. This is our core business: we have automated 80%-90% of how to design a chip once you have the RTL, because these things are so complex and expensive, you had to automate that. What we have never done is the ability to write the RTL or the test bench.
Yeah.
Now with ChipStack, we can do that.
Yeah.
Because that is the core engine of an LLM. We have, you know, a new methodology, and ChipStack has a new way of doing it using a mental model and knowledge graph. It's a much better use of LLMs. We can write the RTL, and then we can write the test bench, because verification is as important.
Yeah
This is an entirely new product category where there was no automation. There's a lot of customer demand-
Yeah
... to deploy that. To verify that this RTL or test bench is correct, of course, it runs a lot of the middle layer or the base too.
Yeah. Yeah.
We will monetize it as an agent plus the use of the base tools.
The optimization here is really just in relation to the test benching and verification in the process flow. Equally, it's a pull-through on the base software layers as well-
Mm-hmm
... on your tool sets.
Mm-hmm.
Pretty clear. Maybe let's jump to IP, because it's been something of a focus. I've heard you earlier today talk about this as being super hot as a category area. Maybe not so much for others in the field, but help us understand, what are the dynamics behind the growth in IP with Cadence at this point? Is this supported by recent acquisitions, or is this a moment in time, perhaps driven by SerDes and other standard libraries?
Yes, yes. IP is doing well, and actually, you know, we normally don't talk about it if it is a one-time thing.
Yeah.
You know, we only talk about it now because it's the third year of very good growth we will have. You know, that's our style anyway. We don't want to, you know, say things unless they are fully, you know, verified. I feel very good about IP. We are at the third year of very strong growth, and there are multiple reasons for that. One is that our products are better. You know, we finally have a good team.
Yeah.
You know, we always say team, technology, customers, right? We have a great team finally in IP. Our products are doing pretty well, especially in advanced node TSMC.
Yeah.
You know, which is-
Mm-hmm
... the most exciting part of the market. Our portfolio has grown. That's the second reason, you know, and more on the AI HPC side. There, you know, we want to focus on some high-value IPs like HBM. Now, that we did acquire from Rambus; that's a great acquisition. DDR is organic, UCIe, PCIe.
Yeah
SerDes. I think the portfolio is better. That's the second reason.
Mm-hmm. Okay.
The third reason is there are more and more foundries.
Mm-hmm.
You know, like, of course, TSMC is doing remarkably, but there are at least three major advanced node foundries, sorry, four, you know, with Intel, Samsung, Rapidus, and TSMC. That's also driving more demand for IP.
Yeah.
Mm-hmm.
Okay. Pretty clear. Maybe just with the chiplet era coming into focus, and I've heard you talking about COT and hybrid COT sorts of designs coming through, how does that all make a pull on the IP business for you as well?
Yeah, I think that trend is good for both EDA and IP business.
Right.
As customers do more and more of their own chips, they use more EDA tools.
Yeah. Yeah.
Also, you know, because these things are so big and they're moving so fast, right? Every year, every other year. The customers, of course, want to focus on their key part. If they can buy a standards-based IP, which is good, from Cadence, they would rather buy it and focus on the CPU part or the AI part.
Yeah
you know, or the auto chip part. I think as long as we can deliver good performance, good PPA-
Yeah
... the customers would rather buy that.
Yes.
I mean, not all of them, but enough of them want to focus on other parts. Now, some customers will do IP themselves because they think that's a differentiator, but a lot of them will buy it-
Mm-hmm.
because they want to focus on some other part of the design process.
Gotcha. Makes sense. If we just turn to the core EDA business, I think you've guided up something close to 12+% for this year. Last year, you grew about 13%. Clearly that's in the low teens, moving back into that sort of category. What's the durability of the growth here, and what should we be thinking about as indicators for growth in the next couple of years?
Yeah, I mean, if you look at it, you know, we always look at growth plus margin together.
Mm-hmm.
You know, we have, I think-
Yeah
... world-class margins, because that's what our investors want, right? We want to grow at a certain rate, but we want, you know, the profitability to be-
Yeah
... better than that, and then we buy back some stocks, so we want EPS to be even better than that. Last year, we grew, like, 14%, and EPS grew around 20%, right?
Yeah.
This formula we have done for several years. If you look at it on a Rule of 40 metric, I think we are in the high fifties.
Yeah.
Right? The last few years. I feel good about that, and I think we will definitely improve it. My goal is to crack 60.
Yeah. 60.
Yeah, yeah.
Okay.
So, so-
Just making sure somebody noted that for my team. There we go.
That's a combination of growth, but also, of course, you know, we need to make sure the margin is good. And, you know, if you look at our incremental margin, I mean, our margin last year was 45%, but incremental margin was 59%.
Yeah. That's right.
I mean, like, if we add $100 million more in revenue, we added $59 million in profit. That's also making our internal operation more and more efficient-
Yeah
... with AI and things like that. Yeah, we always look at both, but yeah, my goal is to cross 60. That should be good for our investors also.
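As a back-of-envelope check, the Rule of 40 and incremental-margin figures quoted in this exchange line up like this; all inputs are the speaker's round numbers from the conversation:

```python
# Rule of 40: revenue growth % plus operating margin %.
revenue_growth_pct = 14        # last year's revenue growth, per the conversation
operating_margin_pct = 45      # last year's operating margin
rule_of_40 = revenue_growth_pct + operating_margin_pct
print(rule_of_40)              # 59 -> "high fifties", with a goal to cross 60

# Incremental margin: profit added per dollar of added revenue.
added_revenue_musd = 100       # $100M more revenue...
added_profit_musd = 59         # ...came with $59M more profit
incremental_margin_pct = 100 * added_profit_musd / added_revenue_musd
print(incremental_margin_pct)  # 59.0
```

The gap between the 45% average margin and the 59% incremental margin is what lets the blended margin, and the Rule of 40 score, drift upward as revenue grows.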
Yeah.
Yeah, yeah.
You would have thought. You did mention Hexagon, and congratulations for closing that deal.
Yeah.
Maybe just help us understand how this all fits into systems design and analysis.
Mm-hmm.
When do you think that can make an impact, particularly on the margin side for Cadence?
Yeah. I think, you know, any M&A, you know, normally whatever company... I mean, Hexagon, the simulation business of Hexagon, is a great group, a great company. You know, they're one of the top-
Yeah
... simulation companies. It was just not ideal inside Hexagon, because Hexagon is more of a hardware company than a software company, and they realized it would be better with Cadence.
Yeah.
Anything we buy is never as profitable as Cadence.
Yeah, yeah.
It takes us about a year or so to get it to better profitability.
Yes.
I think definitely, this year there is some hit. I mean, most of it is not on the operating side. Most of it is on the financing side.
Mm-hmm.
You know, there is some dilution or there is some debt.
Yeah.
We will take care of that, and next year it should be accretive.
Gotcha.
Yeah.
Okay, makes sense. Maybe just turning to China, we did see pretty decent growth last year, and this was despite, I think, some of the concerns you'd expressed that this could be a strange year.
Mm-hmm.
'25 turned out to be 18% growth, and this year's gotten off to a good start. How do you think that'll continue to grow through this year? What are the dynamics that you're looking at in China, certainly in EDA, but also in the IP space as well?
Yeah. China, I mean, did well. You know, it was very turbulent in 2025, and we wanted to be prudent in our guide at the beginning of 2025. I mean, I didn't know that all those things would happen, but there was just a lot of uncertainty, so we wanted to be more careful at the beginning of 2025. This year, I think, I mean, it's difficult to predict, but the environment seems more stable than the beginning of 2025. This year we think China will grow, and we'll see how much it grows, because it's difficult to predict by region. You know-
Yeah
... the growth rate by region is like a double derivative, you know?
Yeah, yeah.
We'll see how much it grows, but the environment is good. There's a lot of design activity.
Yeah.
You know, Physical AI is big in China, of course.
Yeah.
Even, you know, a lot of the other parts of the market. I feel good about China right now. Yeah.
Gotcha. No competition from the local guys in that market either?
Oh, there's always some competition.
Sure.
I think, you know, again, we wanna make sure our tools are best in class, and, you know, in EDA, we have a very good position in China.
Yeah.
In hardware, we have a very good position with Palladium.
Yeah.
IP, we do less in China historically-
Yeah
... because in IP, we are focused more on really advanced nodes and AI.
Yeah.
In EDA and hardware, yeah, it's good, and it should grow. Yeah.
Gotcha.
Mm-hmm.
Before I move to further questions, I'll just maybe give the floor an opportunity to ask Anirudh directly anything.
Thank you so much. This is really helpful. I'm curious about, you know, the emergence of that inference and these SRAM-related chips. Do those all need your EDA tools to design, Anirudh?
Oh, absolutely. Yeah. For any kind of chip, you need them. I mean, there will be a lot of innovation on the hardware side and software side, you know. Yeah, no, all of these, you can't design them by hand. They have to use our tools. They will need Palladium, they will need our EDA software. Yeah. They will need IP.
Awesome.
Yeah. I mean, one thing I want to say, you know, it's very difficult to predict, but, you know, I mentioned this earlier also, that, you know, some people I talk to, some of our customers, they say, "Oh, the inference demand will go up by 1,000x-
Yeah
... in the next five years." Right? That's, that's amazing. Maybe it'll be more than that, right? But then you have to normalize that with the improvements in hardware and software, right? This is from current levels. I think that customer or that partner already assumed that the hardware will improve by 10x.
Mm-hmm.
They assume software will improve by another 10x. The net increase in compute is 1,000 divided by 100.
Okay.
Which is 10x. By the way, 10x over five years is about 60% a year. Even if that gets modulated by power and other things, maybe it's 30%, you know. What I'm trying to say, and you know this already, is that it will not be static. The hardware and software will improve dramatically.
Mm-hmm.
Whether it's this new hardware architectures or advanced nodes. Software will also improve, right? I mean, software has already improved a lot. All these new CS algorithms will be applied to AI, right?
Mm-hmm.
Whether it's partitioning, abstraction, you know...
Yeah
latency, also the lower precision. You know, all those things that happened in CS over 30 years will apply to AI. I think it's gonna be very exciting. Now, of course, it's possible the software improves even more than 10x, right? A lot of times when the software improves more than 10x, the demand can go up even more, right? It's like a very exciting, you know, double exponential, but I think it will be great to see all this innovation.
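The normalization argument above reduces to simple arithmetic; a sketch, using the illustrative 1,000x demand figure and the assumed 10x hardware and software gains from the conversation:

```python
# Net compute growth = demand growth / (hardware gain * software gain).
demand_growth = 1_000    # quoted five-year rise in inference demand
hw_gain = 10             # assumed hardware improvement over the same period
sw_gain = 10             # assumed software improvement over the same period

net_growth = demand_growth / (hw_gain * sw_gain)
print(net_growth)        # 10.0 -> 10x over five years

# 10x over five years expressed as an annual rate (the "about 60%" figure).
annual_pct = (net_growth ** (1 / 5) - 1) * 100
print(round(annual_pct)) # 58
```

The compound annual rate works out to roughly 58%, which is the "about 60%" the speaker cites before haircutting it for power and other constraints.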
Yeah, makes sense. One area we didn't touch on was hardware. I know it's an area that you've done really well-
Mm-hmm
... particularly with the launch of Z3 a year or two ago. Any updates you've got there? It does look as though there's quite decent growth. It's represented well in backlog. By the way, congratulations on your record backlog as well.
Yeah, thank you.
Maybe walk us through that. How does hardware grow this year?
Yeah. Hardware, when we say hardware, I know you know that it's like a full stack. You know, we make our own chips to accelerate logic verification. What happens is, at the verification level in chip design, there are two kinds of software. One is more Boolean, if you remember, like zero-one Boolean logic, you know. Like how a GPU or CPU will work. There's a lot of Boolean logic.
Yeah.
The other part is numerical. You know, like simulation or timing, power, noise. It's more numerical. For numerical, we can accelerate it with CPU and GPU.
Okay.
For Boolean, we build our own custom processor. It's a Boolean supercomputer. It's as complicated as any, you know, processor in the world. When we do that, it runs like 1,000 times faster than standard silicon. We are our own kind of designer, using our own products. When we put that together, hardware and software, those things, this is called Palladium, become indispensable to the design of modern chips. All the big chips right now are designed with our product. You want to verify. See, this is what happened. In the old days, you would design a chip and then do software development, and it would come out to production, right? Any CPU or GPU or something.
Mm-hmm.
The issue is that you don't want it to be wrong, because if it is wrong, you have to iterate.
Okay.
That's one problem. The second problem is the customers want to overlap hardware and software development. You don't wanna wait till the hardware, the silicon, is ready and then start writing software, because that takes too long.
Yeah.
The demand for Palladium is driven by these two things.
Mm-hmm.
What happens then is we overlap hardware and software, so the customers are writing software where no silicon exists. That's why they use Palladium to emulate-
Yeah
... the silicon. There are two advantages. One, you can start writing software. You can boot like Android or Windows or iOS, whatever you want. The second is you can make sure that the silicon is correct.
Mm-hmm.
And to do that, you need to run 1,000 times faster, because if you run on a regular CPU, it's not gonna be fast enough.
Sure.
This is the reason that Palladium became like an indispensable tool for chip design. Now as the chips get bigger, you need more and more Palladium capacity.
Yeah.
Then as there is more and more software, as the system companies start doing silicon, I mean, they are system companies because they have software and hardware, they need to run more Palladiums.
Yeah.
I mean, we have like six years in a row of record growth in Palladium.
Yeah.
I think this year will be another record.
Mm-hmm.
We'll see how it goes. Whenever we start the year, we are more prudent in the assumption.
Yeah.
I'm pretty bullish on Palladium this year as well. Yeah.
Gotcha. Maybe with a couple of minutes to go, I have to ask about how you're gonna monetize the agentic EDA.
Mm-hmm.
When we go back to ChipStack, and, you know, John was pretty clear on the callbacks, that this is on a value-based basis.
Mm-hmm.
This will be perhaps based on tokens. Maybe help us understand how does that work, and could this be margin accretive in a couple years' time for the group?
Yeah. I mean, we always want to be margin accretive. You know us, right? Over all these years, every year we're trying to be. This is another big thing, right, that can help us. Again, yeah, I think it will be token-based. There are all these... I mean, we wanna have a base subscription plus tokens on top.
Yeah.
That's how we want it. These new tools will be new product categories. As they consume work, you know, they can use tokens. This kind of model in AI-
Yeah
... is well established now.
Sure.
We would hope to have a base subscription plus tokens.
Mm-hmm.
That also gives good visibility to our customers, right?
Yes.
It's good for us.
They can see the meter, basically.
Yeah. Yeah.
Yeah. Makes sense. Looks like we've run down the clock. Anirudh, thank you very much.
Thank you. Thank you for your time.