Cadence Design Systems, Inc. (CDNS)

51st Nasdaq London Investor Conference

Dec 10, 2024

Moderator

Welcome, everyone. It's the 51st Nasdaq Conference here in London. And I've got the honor of welcoming Anirudh Devgan, CEO of Cadence, to the stage. Maybe just before we start, however, I've got a disclaimer to read. Today's discussion will contain forward-looking statements, including Cadence's outlook on future business and operating results. Due to risks and uncertainties, actual results may differ materially from those projected or implied in today's discussion. Right. Anirudh, welcome to London. Thanks for coming along. Maybe just to kick us off here, for those who don't know, can you tell us a little bit about Cadence, and in particular the intelligent system design that's really driven your growth the last few years?

Anirudh Devgan
CEO, Cadence

Yeah, thank you, Lee. Good to be here. I think it's my third time in London. This is a great conference, so thank you for attending. And also, such a festive atmosphere. I don't know how many people are in London right now, but the streets are packed. So for Cadence, if you don't know, basically we provide software, mostly software, hardware, and IP products to design chips and electronic systems. What we like to say is that almost any chip designed in the world today uses some form of Cadence software. And over the years now, about 45% of our customers are what I would call system companies, and 55% are semi companies, though there's a blurring of system and semi. System companies would be like car companies or phone companies, and semi would be Broadcom, Qualcomm.

That's the kind of makeup of our customer base. We are in all regions where chips and electronic systems are designed, in all verticals. It's very diversified. Basically, as we say, it's mostly an engineering company. We make engineering software. The software is used to do R&D for our customers. It's engineers designing for engineers. 90% of our headcount is engineers or computer scientists. We have one of the highest investments in R&D in the S&P 500. About 35% of our revenue goes to R&D, and about 15% to application engineering, that's people working with the customers. And we have, I think, a pretty good financial profile, about 42%-43% operating margin. That leaves like 6%-7% for all the other stuff.

So we have been efficient over the years, and I still think there is room to improve margin as we continue to scale, because we are a compounder of value, and we have done this for the last 10 years. So I'll just stop there. And one other thing: over the last six, seven years, what we have executed, as Lee was saying, is what we call Intelligent System Design. And the view of the world, which you probably know anyway, is that we look at the world as three concentric circles: Silicon, System, and Data. A perfect example of that is an electric car. You have all the navigation data. Then you have the actual physical car, which is mechanical plus electrical, hardware plus software. And then the silicon that drives that car.

And that's true not just in auto, it's true in all the other verticals, data centers, mobile. And so from a Cadence standpoint, our expertise is computational software. That's computer science plus math. So this is pretty complicated numerical software. So that applied to silicon is EDA, which is chip design software. Applied to system is SDA, which is simulation and system software. And then computational software applied to data is, of course, AI. So our focus the last six, seven years and going forward is EDA, which is chip design software, SDA, system design software, and AI.

Moderator

Great. Maybe we'll come back to some of those levers and the different verticals as well. But let's set the scene by going through what was said at Q3. It looked like a good print, with momentum into Q4, and in particular, strong bookings expected in Q4. So I wonder if you could frame for us how that's happening. What were the key messages coming through?

Anirudh Devgan
CEO, Cadence

Yeah, this year had a little bit of an unusual shape. Normally, we are very, very predictable because most of our revenue is ratable. This year, about 82%-83% is ratable recurring revenue, and upfront is about 16%-17%, something like that. But the shape of that was more back-end loaded this year, and so I think Q3 was a good quarter, because some investors were concerned: why is it more back-end loaded? Some of it we did to ourselves, in the sense that we had a new hardware product launched in Q2, so that pushed revenue into the second half, and we had certain IP contracts that kicked in in the second half. But I think Q3 was a very strong quarter, and it kind of de-risked the whole year, and Q4 is also looking good in terms of booking pipeline.

We have to close all that. There's still a couple of weeks left. But overall, for the year, we should be at more than 13% revenue growth and more than 42% margin. So when we look at the Rule of 40, operating margin plus revenue growth, we have been at more than 55 for the last several years, and I expect that hopefully to continue going forward. But that's what we always look at: good growth, driven by a lot of thematic things, more and more silicon, more system companies doing silicon, and then good operating margin performance.

Moderator

Very strong, 55-plus there. Maybe if we turn to some of your products, your AI-enabled products in particular, can you help us understand how the engagement is going with customers? What does the rollout of these products, now across the whole design flow, look like? And how do these change things in the medium to long term for Cadence?

Anirudh Devgan
CEO, Cadence

Yes, yes. Of course, AI is a big topic, right? And to me, any new technology has three phases to it, and that's definitely true in AI. We want to make sure that we participate in all three phases, and that's the unique thing about Cadence versus any other software company or semiconductor company. The first phase of AI is infrastructure, which is of course the LLMs and, more importantly, the chips, the GPUs, that are used for training and evaluation. The second phase of AI is applying AI to our own products. That's going to happen in all companies, definitely; we have been working on that for at least five, six years now. And the third phase of AI is that it will create new markets and new products.

And I think the third phase is always the biggest in any new technology. But with AI, the first two can also be big, especially the first one because of the computational power. So first thing is anybody designing AI chips is a big Cadence customer and partner. So like NVIDIA, we have been working with them for like 20 years. And they are development partners with us with several of our products. And I think they themselves have said that they can't design these things without Cadence. But that's also true, not just NVIDIA, which we have a great partnership, but all the other data center companies. And most of them have publicly announced that they are doing silicon, whether it's Google with TPU or Microsoft or Meta or Amazon, and then other parts of the world.

So one part is the infrastructure build-out, which is still going to go on for a while. And the data center build-out is going to extend, as you know, to phones and laptops. Look at Qualcomm, and of course, look at the big phone companies. For all of those, we are a central part of their design process, and they use a lot of our products. And all that infrastructure build-out is accelerating, with more and more products at a faster cadence. So that's one benefit of AI to Cadence. Now, the second part, which you mentioned, is applying it to our own products. And one thing with AI is that everybody calls everything AI, so nobody can really figure out what is AI.

So I was talking to one plumbing company, and they said they are doing AI. This is the problem of talking about AI these days. But from our standpoint, what is really AI? If you look at EDA, we have done this kind of numerical software for like 30 years, and a lot of automation, because some people call automation AI. EDA stands for Electronic Design Automation, so we have done automation for 30 years. And if you look at chip design from the late 1990s or early 2000s to now, it is probably 100 times more efficient because of our products and also the foundry ecosystem that has developed. So we have done automation for a long time. The question then is, what is new that AI can provide to EDA?

Because a lot of other industries just call automation AI, which is not really, I mean, that can be one definition of AI. But the key thing with EDA is that it is very complicated software. Of course, everybody thinks what they do is complicated. But in reality, this kind of mathematical numerical software is designing chips with 100 billion transistors. All of these have to be designed by software. But in the history of EDA, what we did was focus on what I would call a single run. You give it some input, and it runs, does a lot of complicated stuff, and gives you an output. And it runs for a few days in some of these implementation platforms, like Innovus, which is used by most of the big companies.

But that's not what the customers want, because the design process is iterative. You don't run it one time; otherwise, the chip would be designed in two days, right? Typically, our customers will divide the chip into multiple blocks, CPUs, GPUs, camera systems, all those things. Then you run the design, and then you change something, and then you run again, and you change something, you run again, right? This is the design process in anything. But EDA never provided this kind of workflow automation, because you have to carry the knowledge from one run to the next run to the next run to really do workflow automation. And it's not that we didn't want to do that; it's an obvious thing you should do, to optimize the whole workflow rather than a single run.

It's because there was no mathematical way to do it. To give you an example, one of our automotive customers, and auto is a big topic, is designing a CPU, and it takes maybe six months, with each run taking a few days. The designer then has intuition: okay, I did this last time, and now I'm going to seven nanometers, and this worked at 10 nanometers. So they use their intuition, and that's one way to do it. In that example, they were changing 17 different variables in the design. The other, classical way, if you remember your undergrad, is design of experiments. That's a statistical method, so we can automate that as a software provider.

But if you do design of experiments, that kind of design will take 4 million runs to really exhaust the design space. That doesn't make any sense: each run is two days, times 4 million. Now with AI, with reinforcement learning and a lot of these new AI techniques, that can be done in about 200 runs, because the beautiful thing with AI is that it can create a model of a particular circuit, and then you can use that to traverse the search space. That's a huge thing. And then a lot of it can be done in parallel. There are some sequential parts, but you can run those 200 runs in parallel on 10 or 20 machines. So the whole exploration can be done in one or two weeks.
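As a rough illustration of the arithmetic in that argument: the 17 variables, roughly two-day runs, about 200 AI-guided runs, and 10-20 machines are the figures quoted above, while the number of settings per variable is an assumption chosen only to show how exhaustive design of experiments reaches millions of runs.

```python
# Toy arithmetic behind the design-space-exploration argument; this is
# an illustration, not Cadence's actual algorithm or exact numbers.
from math import prod

# Hypothetical mix: 9 knobs with 2 settings, 8 knobs with 3 settings.
levels = [2] * 9 + [3] * 8
exhaustive_runs = prod(levels)
print(exhaustive_runs)           # 3359232 -- millions of runs, as quoted

# Sequential wall-clock time for exhaustive search at 2 days per run:
print(exhaustive_runs * 2)       # ~6.7 million days, clearly infeasible

# Model-guided (e.g. reinforcement-learning) search: ~200 runs,
# parallelized across 20 machines at 2 days per run.
ai_runs, days_per_run, machines = 200, 2, 20
print((ai_runs / machines) * days_per_run)   # 20.0 days, a few weeks
```

The exact run count depends on how many settings each variable has; the point is only that exhaustive enumeration grows multiplicatively while a model-guided search needs a few hundred evaluations.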

But that two weeks is still much shorter than the six months the old design process takes. That's huge: you are doing in two weeks what would take three to six months. But that's only part of the benefit, because human nature is that designers will still iterate. Even if the AI does it in one or two weeks, the designer will still iterate a few times, though less than before. The real value of AI in this context is that because this is optimization, a generative design process, it can give a better answer than a human can, because you're optimizing in a 17-dimensional design space. Not that any of the human answers are wrong, but if you are mathematically searching a 17-dimensional space, the answer can be better, not only faster.

And I think this is the real value of AI: the answer can sometimes be better by 10%, 15% in power, which is equivalent to one technology node migration. It's huge. I mean, some customers will spend months optimizing for 1% or 2% power, and with AI, you're getting like 8%, 10%, 15% savings. Now, it depends on how good the design was. Also, most of these big companies have a wide range of design teams. What we have found with our AI products is that the result is always better, whether slightly better or much better, than even the best team. And since you have a range of teams, you can uplift the talent of all the teams. So I think this is the main thing with AI.

Because of this knowledge transfer, because of this model-based approach, we can actually optimize the workflow, and the benefit can be significant. Now, what does it do for our customers? First of all, you can do much bigger chips, which they need to do because we are still in this node migration, right? We are at 3 nm. We'll go to 2. We'll go to 1.4. We will go to 1. This much the industry can see, so that's at least 10 years of migration. Of course, a lot of companies are working beyond 1 nm, but at least 10 years we can see. And at each node, the size of the chip effectively doubles. So even if it's the same size chip, you have to double the number of transistors.

Right now the biggest chip, as you probably know, is Blackwell from NVIDIA. They have two chips in a package, so it's 100 billion plus 100 billion, 200 billion transistors. By 2030, it's widely expected that chips will be 1 trillion transistors. Those things are 10 times bigger and much harder to design, so the customers themselves need a higher level of automation, things like AI, to be able to do that. I think that's one benefit to our customers. The second benefit is that right now, about 11% of our customers' R&D spend is EDA or software, and 89% is headcount. And we talk to a lot of customers: if the chip gets 10 times bigger, the design complexity gets 30, 40 times bigger, because with bigger chips you have more things, you have software, all that.

There is no way that our customers can hire 30 times more engineers. First of all, they don't want to. Second, there are not that many engineers in the world. I think the number of engineers will grow maybe 2-3x, because the problem is exponentially increasing; even with a lot of automation, headcount will grow by 2-3x. But that still leaves roughly a 10x gap between what is needed and what can be achieved, and that can be closed with more automation and AI. So looking toward 2030: EDA used to be 7%-8% of R&D and has grown to 11%. The real opportunity is that the customer will spend more on compute and software, and that 11% can move up versus the overall spending on headcount.
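The capacity-gap argument above reduces to simple arithmetic; the multipliers below are the speaker's rough ranges, not forecasts, taken at the ends that reproduce the quoted 10x figure.

```python
# Design complexity is projected to grow far faster than engineering
# headcount can, so the remainder must come from tool productivity
# (automation and AI). Speaker's rough figures, not precise forecasts.
complexity_growth = 30   # low end of the "30-40x" complexity estimate
engineer_growth = 3      # high end of the "2-3x" headcount estimate

gap = complexity_growth / engineer_growth
print(gap)               # 10.0 -- the "10x gap" to be closed by tools
```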

Moderator

Maybe just touching on all that, because that was a lot for us to digest. Essentially, if you're running a lot of multivariate optimization to raise that bell curve, you're transferring cost efficiencies to your customer, both in savings on headcount and in time to market. How is Cadence going to capture value on this as you move from 11% to maybe 15% of the R&D budget? And how much of that is directly attributable to your LLMs or AI play in these toolsets?

Anirudh Devgan
CEO, Cadence

Yeah. So we have the base tool, which is Innovus, which is what we have always done and is still a very complicated tool. And then we have these AI capabilities that run on top of it. So the customer can see, with and without AI, what the benefit is. And we also ran it internally.

We do our own design as well, for Palladium, which is our hardware system, and we also saw like 15% better PPA there. Our experience is that we are very collaborative. Like 60%-70% of our revenue comes from our top 50 customers, and of course we have a lot more customers than that. The top 50 change a little over time; some new ones come in, like the hyperscalers, but these are the top companies in the world, we would say, in semi and electronic systems. What we have found is that we have a pretty collaborative relationship, so if we deliver value, the history is that we normally do get a share of that value. And we have all kinds of business models to work with our customers; you can see that in our history.

So our goal is to deliver value. And we are anyway essential to the design process; you need to use these tools. So we price them accordingly, we have different business models, and we see how that progresses.

Moderator

I'll open it up to the floor in a second, but I just wanted to ask one question on automotive. Clearly, it's a vertical you see a lot of opportunity in. You made the acquisition of BETA as well, so you see it both in emulation, or systems, as well as in the compute power. And a lot of work going on in China, it seems, as well. I wonder if you can help us understand how big this opportunity could be for Cadence?

Anirudh Devgan
CEO, Cadence

Oh, I think it's going to be huge. I mean, of course, we love the data center opportunity because that's right here and now.

I would call it Horizon One: all the AI that's changing the data center. A lot of people predict the data center opportunity is, from a silicon standpoint, like $200-$400 billion; you can take either end of that range. That's a big number, because the semiconductor market right now is $600 billion, so the data center could increase it by $200-$400 billion. That's why there's all this design activity with our big data center companies. That is happening right now, and it is going to continue at least for a few years, if not more. But we are always looking at what is next, and what is next after that. And right now, people are a little down on auto when they look at some of the revenue or some of the challenges.

But when we work with customers, we are looking at what they're designing now, which will come out three, four years from now. And right now, I see as much activity in auto as I saw in data center a few years ago. In auto right now, each car has roughly $400 of semiconductors, and there are about 100 million cars made worldwide every year. So that's about $40 billion of semiconductor content. But if you ask most people, they expect that number per car to go to anywhere between $2,000 and $4,000. At $2,000 to $4,000, at 100 million cars, that's $200-$400 billion. That is as big as the data center AI opportunity is right now. The other thing, coming back to these three phases of AI: the infrastructure first, then current products, and then new products.
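The automotive numbers above can be sanity-checked with back-of-the-envelope arithmetic; the per-car content and annual unit volume are the speaker's figures, not independently verified here.

```python
# Implied automotive semiconductor market from the figures in the talk.
cars_per_year = 100_000_000            # ~100 million cars made annually

today_content = 400                     # ~$400 of semiconductors per car
today_tam = cars_per_year * today_content
print(f"${today_tam / 1e9:.0f}B")       # $40B today

future_content = (2_000, 4_000)         # expected $2,000-$4,000 per car
future_tam = [cars_per_year * c for c in future_content]
print(f"${future_tam[0] / 1e9:.0f}B-${future_tam[1] / 1e9:.0f}B")  # $200B-$400B
```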

So the big question is, what will be the new products? Because in the end, AI has to generate trillions of dollars of value to justify hundreds of billions of dollars of investment. So the million-dollar question, or trillion-dollar question, is: what new markets will AI generate? I'm sure everybody is thinking about that, and I do have an opinion on it. Now, one thing with AI, of course, is that we talk about how AI can do workflow automation and all that. That's good, and we will capture some of that value. But to generate trillions of dollars of value, what is AI proven to be good at? Of course, a lot of people say it can do reasoning and it can be superintelligent. Let's hope that's the case.

But what it is proven to be good at right now is seeing, vision, and talking, chatting. At talking and seeing, it is proven to be as good as humans, let's say. So to me, the biggest opportunity is applying that. In the beginning, it's the infrastructure phase, but the real value will be in the verticalization of AI, not in the horizontal, because AI is a technology. It's like calculus: a few years later, nobody says, "I'm doing calculus now," because it's embedded in everything. So in the end, it will be the vertical application of AI. In my mind, one of the biggest will be what I would call physical AI, because right now AI is not applied to the real world; it's mostly in software, which is great.

So in physical AI, cars will be huge if they can be made more and more self-driving with more AI, and there are signs of that. Now, it has been coming for 10 years. I remember when my daughter was little, they said, "Oh, you won't need a driver's license." But it seems to be happening. Look at what Waymo is doing, what Tesla is doing, what all the Chinese companies are doing. I think in the next few years, it's highly likely that self-driving, or more and more driver assist, will happen in cars. And beyond cars, there are other autonomous systems: robots, drones, planes. This is all going to be physical AI, and it is also a huge market.

If you look right now, the chips that are used in drones are similar to the chips used in cars; look at Tesla or NVIDIA. And for robots and cars, these chips will be much more power constrained, so they will be different from data center chips, and they have to be custom designed again. That's what we're seeing right now. I just talked to one company that's going to do a special chip for a drone, and we talked about cars and robots. That is definitely trillions of dollars of economic value. So to me, Horizon One is infrastructure, the data center. Horizon Two is physical AI: cars, drones, planes. And then Horizon Three. Now, these are staggered; not everything is going to happen together, but we are here for the long run, so we always track what is happening next.

Horizon Three, to me, is sciences AI, because eventually AI will be applied to do real science: material science, physical science. But the biggest application will be life sciences, drug discovery, and the whole pharmaceutical process. So we also invested, about two years ago, in biosimulation and life sciences. That is maybe five, 10 years in the making; there's still lots to do. But you don't want to be too early, and you don't want to be too late. So in my mind, there's infrastructure AI, then physical AI, and then sciences AI, and we want to make sure we are relevant in all three phases. And auto especially, but then robots and drones, I think will be huge in a few years.

So that's why we did the BETA acquisition. I just came back from China, and all these car companies are also designing their own chips. So that Silicon, System, Data story, which is happening in the data center, is of course happening in automotive.

Moderator

Fascinating stuff. I did say I'd open it up to the floor, so I just wonder if there are any burning questions. Maybe at the front here, or second row.

Audience Member

You have a new hardware product cycle coming. Hardware is lumpy; that can be good and bad. Should next year be very good, I guess?

Anirudh Devgan
CEO, Cadence

Well, that's a good question. I mean, I've been doing hardware for some time, and hardware has been record year after record year for maybe the last five years.

We are not talking too much about 2025 because we want to see how 2024 ends up. We still have to finish our Q4; even though the pipeline is strong, you have to convert the pipeline. But if history is any indication, 2025 should be good for hardware, because normally the first year there's a lot of transition. We launched the hardware product in April, and in the first year you have to ramp up production and all that; the second and third years have historically been very good. And we are competitively in a very, very strong position. Just for folks who may not know all the details: we are the only company that designs its own chips to make these hardware systems. These are pretty complicated systems.

So for example, when NVIDIA and Jensen talk about using a supercomputer to design the supercomputer, he's talking about Palladium, the Cadence supercomputers. Blackwell, for example, was designed on eight racks of Z2. Palladium Z2 is our last-generation system, because we have to be ahead of the market. So eight racks of Z2 designed Blackwell, which is 200 billion transistors. With Z3, we can now do 16 racks, so it can design one-trillion-transistor systems, which the industry will reach by 2030, and by then, we'll have another generation. These chips are designed by us and made by TSMC. Each rack has, we don't disclose exactly, but more than 100 chips, all liquid cooled, and 16 of them connected.

So there's like 2,000 chips, all liquid cooled, the biggest chips TSMC makes, emulating a physical chip that has not yet been created, 1,000 times faster than x86. What customers do is use Palladium, which is a hardware platform, to emulate a chip. NVIDIA, for example, because they have publicly talked about it, will emulate Blackwell and boot all the software, verifying that it works even before Blackwell, or a phone chip or a car chip, exists. So we have a huge competitive advantage, and we are winning even in Q3 and Q4, because the product was launched in Q2. We are doing very well with all the big customers.
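A quick back-of-the-envelope on the rack figures quoted above: the rack counts and transistor totals are the speaker's numbers, and the per-rack capacity below is just the implied ratio, not a disclosed specification.

```python
# Implied per-rack emulation capacity from the figures in the talk.
z2_capacity = 200e9 / 8    # Blackwell (~200B transistors) on 8 Z2 racks
z3_capacity = 1e12 / 16    # ~1T-transistor designs on 16 Z3 racks

print(z2_capacity / 1e9)           # 25.0 (billion transistors per Z2 rack)
print(z3_capacity / 1e9)           # 62.5 (billion transistors per Z3 rack)
print(z3_capacity / z2_capacity)   # 2.5x implied per-rack generational gain
```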

And all the AI chips by nature are very, very big and require this kind of capability, because software is a big part of bringing up any of these systems. While the customers are designing the hardware, they will use Palladium to emulate the chip and then develop software in parallel. And the demand for hardware goes up with the size of the chip: if the chip gets bigger, you need more hardware. So long story short, I am pretty optimistic about our hardware positioning going into the next few years.

Moderator

Anirudh, thanks so much.
