
Wells Fargo's 9th Annual TMT Summit

Nov 18, 2025

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Let me go ahead and get started. Thank you all for joining us this morning, and those of you on the webcast. I'm Aaron Rakers, the semiconductors and IT hardware analyst here at Wells Fargo. Pleased to kick off my meetings today with the management team from Rambus. We've got Luc Seraphin, the President and CEO, and Desmond Lynch, the Senior Vice President and CFO. We've been following the Rambus story for quite some time, and we're extremely excited to have you guys participate in the conference. Clearly you're a play on the memory side of the enterprise market. But for the audience, and those on the webcast, why don't we level set first and give us a little overview of the company and what role you play. Then certainly I'll get into some, you know, growth drivers and stuff like that.

Maybe at a high level, let's just, you know, start there, talk a little bit about Rambus, and we'll go from there.

Luc Seraphin
President and CEO, Rambus

Hey, thanks, Aaron. We go to market in three different ways; we have three different types of businesses. We have a patent license business, which is focused mostly on memory interface technologies. That business is about $210 million a year. It's a very stable, long-term business, because we have long-term contracts, and a very high-margin business, which fuels cash into the company and allows us to develop technology for our customers. The second part of our business is a silicon IP business, where we develop pieces of silicon IP that we sell to semiconductor companies, and these semiconductor companies integrate those IP blocks into their solutions, be it TPUs, accelerators, and these types of things. That business was about $120 million last year and grows about 10%-15% a year.

This is divided approximately 50/50 between IP that we provide for security solutions, which have become very important in the market, and high-speed controller solutions for memories, PCIe, and this type of technology. That is the second part of our business. The third part of our business is a product business, a traditional fabless semiconductor model. That business is dedicated to the data center. It was about $240 million last year, and if we take the midpoint of our guidance for Q4 this year, it is poised to grow about 40% year-over-year.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

That's a growth engine for us. You know, these three businesses really address the memory subsystems, which are becoming, you know, extremely critical to the performance of, you know, servers, whether it's in traditional servers or in AI servers.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep. That was a great overview. I'm gonna double-click on some of that. You know, you mentioned the 10%-15% growth in the $120 million silicon IP business. Can we double-click on the growth drivers of the product, the chipset business? I think you've given some TAM expectations in the past. 40% is remarkable growth, but, you know, as we look out going forward, how do we think about the growth profile of that chipset business, given its importance?

Luc Seraphin
President and CEO, Rambus

The chipset business, you know, our current business is based on a chip that is called an RCD, which is an interface chip between processors and memory. That business has been growing quite nicely for us. We grew to a position where we have about 40% market share today. In the DDR4 generation of products, we were closer to 25%, so very nice growth when the market moved from DDR4 to DDR5. The TAM for the RCD chip is about $800 million today. In the DDR5 generation of products, on the memory modules, in addition to this RCD chip, we have what we call companion chips. These companion chips add an additional TAM of about $600 million: $300 million of that is a power management chip, and $300 million is the rest of the companion chips.

There is an improvement of the solutions that we're going to provide to the market with a solution called MRDIMM. This MRDIMM adds an additional $600 million of TAM to what we just discussed. These technologies are also waterfalling into the client space. Very high-speed client systems are going to require similar solutions, and that adds another $200 million approximately. If you add all of that, that's a market opportunity of about $2 billion.
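The TAM buildup Luc walks through can be tallied in a quick sketch (all figures approximate, in millions of dollars, exactly as cited in this conversation):

```python
# TAM components for the chipset business, in millions of USD,
# as cited in the discussion (approximate figures).
tam_musd = {
    "DDR5 RCD": 800,
    "Companion chips: PMIC": 300,
    "Companion chips: other": 300,
    "MRDIMM incremental": 600,
    "Client (high-speed)": 200,
}

total = sum(tam_musd.values())
print(f"Total opportunity: ~${total / 1000:.1f}B")
```

The components sum to roughly $2.2 billion, consistent with the "about $2 billion" figure cited.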

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

$2 billion, yeah. And the growth algorithm of that, you know, those are TAM numbers as of today.

Luc Seraphin
President and CEO, Rambus

Yep.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Do we think that grows 15%-20%? How do you think about the growth of those opportunities?

Luc Seraphin
President and CEO, Rambus

One of the proxies we use for growth is the growth of the server market, which, you know, this year is gonna grow mid to high single digits. Within the server market, traditional servers tend to grow, you know, mid-single digits.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

You know, maybe higher than that. The AI servers are growing at a faster rate, but they represent a smaller portion. That's the first proxy we use, this server market growth. There are other growth drivers that can accelerate, you know, this mid to high single digit. One is the number of memory channels that each one of the processors can handle. As Intel, AMD, and others roll out their platforms, these platforms present more memory channels with every new generation. They do this to be able to address more capacity.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

Every time there is more channel, there is more opportunity for our chipsets. You know, that is the way we look at it. That adds to the growth that we talked about of mid-single digit. Finally, the MRDIMM solution that we can talk about a bit later provides a content increase, you know, which.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Mm-hmm.

Luc Seraphin
President and CEO, Rambus

Which also is an accelerator for growth. We certainly see, you know, our growth in that business, you know, exceeding the traditional server market growth.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

You've touched on a lot of my questions here with all of.

Luc Seraphin
President and CEO, Rambus

Oh, okay.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

We're gonna keep double-clicking a little bit.

Luc Seraphin
President and CEO, Rambus

Yeah.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

The way I think about what Rambus's role is, is that as server performance, as core count expands, the memory subsystem gets more and more complex, you know, more and more opportunity for you guys. Today, the lion's share of your business is in RCDs. Can you talk a little bit about the success you've seen in PMIC, where we're at in that ramp phase? Do you think you get a 40% share there like you have in DDR5 RCDs? Just help us walk through the progression of where we are today and how we think about that, let's say, over the next, you know, 12, 24 months.

Luc Seraphin
President and CEO, Rambus

Sure. Where you started is the increased complexity of the memory subsystems.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

What's happening in the market is that the processors from Intel, AMD, and others come out with more cores, so more processing power. You know, when you have more cores, you need to have more memory to deal with the algorithms that run on those cores. Memory technology traditionally has evolved at a slower pace than processor technology. The memory subsystem has become increasingly critical in the sense that you have to optimize that memory subsystem to be able to provide enough data fast enough for the processors to perform. That has been the whole, you know, role of what we do.

When the market went from the DDR4 generation of memory to the DDR5 generation of memory, the industry decided to move some functions from the motherboard, or the CPU, to the memory module.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

In the DDR4 generation of products, we had one RCD chip. In the DDR5 generation of products, we have an RCD chip, a PMIC chip, as you're saying, and a couple of other chips. That adds dollar value to the content we provide. The PMIC is the power management chip; its role is to deliver power to the module in a very stable and very efficient way. The PMIC is becoming the next most critical chip on that module after the RCD. We introduced our first generation of PMIC early last year, but we've invested in PMIC for several years now, as we wanted to own that technology internally.

We're starting to ramp, you know, those PMICs into the market. We're starting to see revenue contribution from these PMICs, and we continue to see growth quarter-over-quarter. It was important for us to manage the transition of the RCD first from DDR4 to DDR5. That transition was critical. That's when we went from 25% share to 40% share. That's the most critical item. After that, we introduced our PMICs and the other companion chips, and they contribute to our revenue growth going forward.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

And I think last quarter you said the PMIC contribution was mid-single digits, up from low single digits. So again, we're really early, you know, still today in that PMIC opportunity. Do you think, you know, we should think about that successfully ramping over these next couple of quarters? That's the line of sight you see?

Luc Seraphin
President and CEO, Rambus

Yes. You're correct. We do see momentum. You know, Q2 we were low single digit. You know, Q3 mid-single digit. We expect Q4 to be high single digit as a percentage of our product revenue. We see this nice ramp. I would say the natural introduction for us is Gen 3 DDR5 because of when we started to develop those products. Gen 3 DDR5 is starting to ramp in the market now. We expect to see continued growth, you know, over 2026 on those PMIC solutions.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Perfect. Maybe I should've asked this earlier. I mean, your competitive landscape, you know, can you remind us real quickly? You guys have 40% market share; who are the others? Who are you seeing competitively?

Luc Seraphin
President and CEO, Rambus

On the RCD chip, the traditional competitors, we have two competitors, you know.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

Renesas and Montage. You know, years back, there were many more players, but these chips are harder and harder to make and to design. Now the market's been reduced to three vendors: us, Renesas, and Montage. They continue to be our, you know, traditional competitors on the RCD side. On the PMIC side, the landscape is slightly different. Renesas is a player on the PMIC side. MPS is a player on the PMIC side. Montage does not play on the PMIC side. That's a slightly different landscape. On the other companion chips, you know, we see the same traditional.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

Renesas and Montage.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

That's perfect. There's never a shortage of acronyms and technology transitions to think about, and that seems to be increasingly prevalent in memory. Early on when I covered you guys, there was a lot of debate around HBM, you know, your role in HBM, so maybe we can talk about that a little bit. You know, also talk about, like, you mentioned MRDIMMs. There are SOCAMM 2s. There are things coming down that are JEDEC standards, which is great for you. Talk about, you know, where you're at on HBM, what role you play, but more importantly, it's not just HBM, right? It's the other pieces of the memory hierarchy that you participate in, in these AI servers.

Luc Seraphin
President and CEO, Rambus

Sure. I'll start with the traditional server. You know, in a traditional server, it's quite simple. You have a processor, and you have memory modules. And on these memory modules, you have the types of chips we've just talked about.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

If you take an AI server, in addition to this traditional server architecture, you have, you know, HBM attached to GPUs. This appeared when AI started to be, you know, talked about a couple of years ago. If you take an AI server, both coexist. You do have GPUs and HBM memory, and you do also have, you know, a traditional server with traditional memory. Typically, you know, in an AI server, you have more traditional, I would say, DRAM memory than you have in the standard server. In a standard server, you may have 1 TB of memory. In an AI server, you have 2 TB-4 TB of memory.

I think one of the confusions that happened in the market is when AI started, and there was this whole talk about HBM, you know, a lot of people initially thought that would be at the expense of traditional DRAM, but it's actually not. They coexist. You cannot grow HBM in an AI server if you don't grow, you know, at the same time the traditional DRAM on modules.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

AI has actually been a catalyst for us, especially in the market for the adoption of DDR5. If you look at an AI server, the need for DDR5 is stronger than in a traditional server, because to prepare the data for these GPUs and HBM memory, you actually need a lot of processing power on the traditional server side, and that could not be done with DDR4. The AI server was a catalyst for the adoption of DDR5 in general. It's also an area where you typically need more capacity and more bandwidth. They were always calling for the latest generation of DDR5 platforms, and they were always calling for the maximum number of channels and the maximum number of modules per channel.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

This has been a good thing for us. That's one thing we always want to clarify: AI has actually been a catalyst, you know, for growth for us. The second thing, on HBM: HBM memory does not need the equivalent of an RCD chip. That just does not exist in that architecture. What we have as part of our portfolio, we talked earlier about our silicon IP portfolio, is that we develop HBM IP, and we sell HBM IP licenses to people who develop GPUs or accelerators and need to have an HBM interface. They can buy that HBM interface from us and integrate it into their chips. That's the other play we have in HBM. Typically, we've been at the forefront of performance on HBM.

We've been recognized for that, for the HBM controllers that people integrate into their solutions. I think you talked about SOCAMM.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

It's an exciting thing. You know, some people in the industry looked at LPDDR, which is low-power DDR, which is typically used in, you know, mobile phones or client systems, and said, "Why do they use LPDDR? Because it's low power. So why don't we actually benefit from this low power in servers?" That was the attempt. LPDDR, although it has lower power, brings other challenges, in particular ones that have to do with reliability, with capacity, and these types of things. The first attempt was not really successful. The good news for us is that this SOCAMM architecture has been brought to JEDEC. JEDEC is the body that defines all of these solutions.

You know, at Rambus, we take pride in the fact that we have chip solutions for every JEDEC-defined module, whether it's in the client space or the server space. The very fact that the SOCAMM definition is coming to JEDEC is good news for us. We're part of JEDEC, and I think it will force the industry to find technical solutions to these challenges. You know, if SOCAMM ramps up, we're gonna be part of that.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yeah. Just to be clear, SOCAMM 2, you were making a point, SOCAMM 1, which is more.

Luc Seraphin
President and CEO, Rambus

Yep.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

An NVIDIA-designed.

Luc Seraphin
President and CEO, Rambus

Yep.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

You know, architecture.

Luc Seraphin
President and CEO, Rambus

Yep.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Pushing that to JEDEC opens up opportunity. That is on the signal integrity attributes of that SOCAMM module. So what is that, content-wise, for you guys? It's not an RCD.

Luc Seraphin
President and CEO, Rambus

No, it's a different architecture. First of all, SOCAMM is a different, you know, form factor.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

You know, one of the challenges with LPDDR is that you have to have your memory very, very close to the processor. It's gonna bring signal integrity challenges and power integrity challenges, which is what we focus the company on. We're pleased that that is coming to JEDEC.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

You know, the initial content that we see is certainly there's gonna be a controller chip, you know, that already exists, like an SPD hub that already exists. There's gonna be some opportunities for power management because power management has to be taken care of there. There's no equivalent today of an RCD chip or a signal integrity chip on the current view of SOCAMM. I think these technologies are going to evolve over time.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Perfect.

Luc Seraphin
President and CEO, Rambus

We will need to have, you know, some sort of chip that deals with signal integrity. We will continue to have a power integrity chip and this controller chip. Today, the SPD hub is a chip that can be used in SOCAMM or SOCAMM 2.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

That's perfect. Now I wanna jump, maybe before I go to this next architecture question, you know, maybe on the traditional business, maybe two questions. Like, I saw an article, a private company introduced, I think, a thousand-plus core CPU last week. You know, the core counts are going to continue to expand. How do you think about it: we're going from 8 channels to 12 channels. Does 12 go to 16? Does 16 go to, I don't know, pick your number, 24? Because that is clearly, you mentioned mid-single digit, high single digit server unit growth, that's underneath that, right? That's an additive driver for you. Where does channel count go? And then that's gonna segue us into, like, MRDIMMs.

Luc Seraphin
President and CEO, Rambus

Yeah, that's correct. I think the first proxy for growth is the growth of the server market. And then each, you know, CPU over time provides an increasing number of channels. In DDR4, traditionally, you had 8 channels. When the market transitioned from DDR4 to DDR5, Intel stayed on 8 channels, and AMD provided 12 channels. What it means is: 12 channels, 1 module per channel, that means you have 12 RCD chips. It's as simple as that. If you put 2 per channel, then it's 24 chips. Yeah, AMD moved from 8 to 12; Intel stayed at 8. The current generation of Intel ramping in the market now is at 12. That's additional growth for us.

You know, they converge into 12. You know, both are announcing the next generation product with 16 channels. And that's a good thing. That's another, you know, growth driver. For us, it means a gradual growth, potential.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

You know, going from one vendor at 12 channels to two at 12 now, and two moving to 16, is really, really good. There are talks about 20 channels, but then you start to reach, you know, some limitations in terms of implementation: the space that you have in a server to actually have 20 channels, and the signal integrity challenges, because the more channels you put in, the further away they are from the processor, so the more complex the signal integrity problems are. But that's where it's going. You know, from 8 and 12, to 12 and 12, to 16 and 16, I'm talking Intel and AMD.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

To, you know, talks about 20, but.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

And again, it, it's dual channel support, dual modules per channel. But on average, what would you typically see on a server? Is it 1.2, 1.5?

Luc Seraphin
President and CEO, Rambus

Yeah, that's the good way to look at it. One point something, around 1.5, a little bit more than 1.5. The reason is it really depends on the workload. You know, there's a little bit of a compromise here. If you're really looking for very high speed, you typically have one module per channel. And that's typically what we see, you know, in traditional servers. You want speed, very fast access to memory, you have one module per channel. If you go to AI servers, they have more need for capacity than they have for speed. So they typically populate with two modules per channel for capacity. It's a bit at the expense of speed.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Mm-hmm.

Luc Seraphin
President and CEO, Rambus

It is workload dependent. That is why we are saying it is not that everyone is going to go to two modules per channel. Some are going to go for speed with one module per channel. Others are going to go for capacity with two modules per channel. It is really dependent on the workload. Yes, the way we look at it is that is slightly above 1.5.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Okay.

Luc Seraphin
President and CEO, Rambus

Is the average.
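The channel math discussed above can be sketched quickly. This is an illustrative helper, not a Rambus model; it simply assumes one RCD chip per memory module, per the conversation:

```python
# One RCD chip per memory module; modules = channels x modules-per-channel.
# Illustrative arithmetic based on the figures discussed.
def rcd_chips(channels: int, modules_per_channel: float) -> float:
    return channels * modules_per_channel

print(rcd_chips(12, 1))    # 12: a 12-channel CPU, 1 module per channel (speed-optimized)
print(rcd_chips(12, 2))    # 24: same CPU fully populated (capacity-optimized AI server)
print(rcd_chips(16, 1.5))  # 24.0: a next-gen 16-channel CPU at the ~1.5 average cited
```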

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

That's very helpful. I'll go back and think about my bottoms-up analysis.

Luc Seraphin
President and CEO, Rambus

Yep.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

All the math around that. MRDIMMs, I think we've talked in the past, you know, upwards of a 4x content increase on a chipset basis. You know, just remind us again what we should be looking for in terms of seeing MRDIMMs become a material, you know, potential incremental driver for the company.

Luc Seraphin
President and CEO, Rambus

Yeah, so MRDIMM is a great, in my opinion, it's a great.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Maybe we start by explaining what, what are MRDIMMs?

Luc Seraphin
President and CEO, Rambus

That's right, let me explain what MRDIMM is. You know, in a server, we just keep talking about, you know, having a processor, and then you have these memory modules, which we call DIMMs.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

You know, that's how they've been called. And we also talked about the fact that, you know, processor technology evolves much faster than memory technology. Typically, processors have the ability to access memory on their bus much faster than the memory is able to take it. The idea of an MRDIMM is, why don't we replace a DIMM with what we call an MRDIMM? On an MRDIMM, instead of having one side of the DIMM populated with memory, you have the two sides of the DIMM populated with memory, and you multiplex access between the two sides. Basically, with a lower speed memory, you can use the full speed of the processor bus.

What it means is that you remove a DIMM from your system, you replace it by an MRDIMM, and all of a sudden, you have double the capacity because you have memory on both sides of the DIMM, but also double the bandwidth, double the speed because you multiplex access between the two banks of memory. We were the first one to announce a product about a year ago that is JEDEC compliant. It's supported by JEDEC, which is this industry standard, which is really, really good because the whole industry is going behind it. This will intercept the market in next-generation platforms from Intel and AMD, Diamond Rapids and Venice.

Those are hopefully going to be introduced to the market in 2026, ramping into 2027. The good thing is, because you have this MRDIMM with double capacity and double bandwidth, the content on the module itself is much higher for us. On the standard DIMM that we talked about earlier, you have the RCD, the PMIC, two temperature sensors, and an SPD hub. Those are the companion chips that we talked about. If you look at an MRDIMM, you have a more complex RCD, because you have much more memory.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

You have a more complex PMIC. When I say more complex, it means slightly higher ASP.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Mm-hmm.

Luc Seraphin
President and CEO, Rambus

You still have an SPD hub, and you still have two temperature sensors. You have to add 10 additional chips, which we call data buffers, which are there for cleaning up the signals from the memory because of this multiplexing. You know, overall, when you replace a DIMM with an MRDIMM, the dollar content is multiplied by 4.
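The bill-of-materials comparison can be tallied as a quick sketch (chip counts as described in the conversation; no pricing is implied):

```python
# Chips per module, per the discussion. The MRDIMM's RCD and PMIC are more
# complex (higher-ASP) versions; the 10 data buffers are the big adder.
standard_dimm = {"RCD": 1, "PMIC": 1, "temperature sensor": 2, "SPD hub": 1}
mrdimm = {"RCD (more complex)": 1, "PMIC (more complex)": 1,
          "temperature sensor": 2, "SPD hub": 1, "data buffer": 10}

print(sum(standard_dimm.values()))  # 5 chips on a standard DDR5 module
print(sum(mrdimm.values()))         # 15 chips on an MRDIMM
```

With the higher ASPs on the RCD and PMIC, that tripled chip count roughly quadruples the dollar content per module, consistent with the 4x figure cited.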

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

I've always said 8 going to 30. Easy way to.

Luc Seraphin
President and CEO, Rambus

That's an easy way to look at it.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

Again, it depends on where the pricing is gonna be at that point in time, but yeah.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

I think I know the answer to this. The target would be to attain 40%, or do you see a higher opportunity from a share perspective on MRDIMMs? You're first to market.

Luc Seraphin
President and CEO, Rambus

Yeah, we're first to market, and it's always our goal to get to 40%-50% share when we're first to market. You know, the ramp is really going to depend on when and how fast the platforms from Intel and AMD are going to ramp. This is always a dependency that we have. The whole ecosystem has to be ready for these things to ramp in the market. One thing I would add to the MRDIMM discussion is that if you take an MRDIMM, it becomes a very complex system. Given the density of memory that you have here and the speed that you have to deal with, the interoperability of all these chips is becoming very, very important. We see this already in the standard DIMM.

We spend a lot of time with our memory customers or with Intel and AMD making sure that all the chips work well together. You know, the more suppliers you have for this, the more complex it's becoming. When you move to MRDIMM, it's gonna be even more complex. What we hear from our customers is that they are looking for a one-stop shop. They're looking for people who are able to look at this at the system level.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Mm-hmm.

Luc Seraphin
President and CEO, Rambus

Who can make sure that the RCD works well with the PMIC, works well with the controllers and the DBs. We believe that, you know, when the time comes for MRDIMM to ramp in the market, we will have the ability to sell the whole chipset. It's gonna be much easier for our customers than, you know, what they traditionally do or did in the past, which is to mix and match between suppliers. I think that having the whole chipset is gonna be a critical success factor for us.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep. That's a great overview. I'll pause to see if there are any questions from the audience. If not, I'll keep going. Why don't I keep going? Des, we've talked about all the various growth drivers; can you walk us through how we should think about the rest of the P&L, right? How do we think about margins? How do you think about operating margins? Just walk us down the P&L, you know, from a financial perspective.

Desmond Lynch
Senior Vice President and CFO, Rambus

Yeah, we've been very pleased with the top-line growth, as Luc outlined from the start. Really, the performance on the chip side has been excellent: 40% growth this year, assuming the midpoint of our guidance. When you look at the gross margin profile of the business, patents is 100% gross margin, and our silicon IP is operating around 95% gross margin. And then you have our chip business, which has a long-term target of 60%-65%. We've been operating around the 61%-63% margin range here in the last three years or so, really driven by our disciplined approach to pricing as well as our ability to drive continued manufacturing cost savings.

Going forward, as the chip business continues to be the highest-growth area within the company, now representing over 50% of our total company revenue, what you will see is some mix-driven changes on the gross margin side. What we will also see is some offset from an OpEx leverage perspective, which will enable us to drive towards 40%-45% operating margin with really strong cash generation. We are really pleased with our business performance. There will be some changes to the middle part of the P&L, which is entirely driven by revenue mix, but we still have that strong commitment to driving strong operating profit and cash generation, Aaron.
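As a rough illustration of the mix effect Des describes, one can blend the segment margins with the revenue figures mentioned earlier in the conversation. The product figure below simply applies the cited ~40% growth to last year's ~$240 million; all numbers are approximations pulled from this discussion, not guidance:

```python
# (revenue in $M, gross margin) per segment, approximated from the discussion.
segments = {
    "patent license": (210, 1.00),
    "silicon IP":     (120, 0.95),
    "chips":          (240 * 1.40, 0.62),  # ~40% growth; midpoint of 61%-63%
}

total_rev = sum(rev for rev, _ in segments.values())
blended_gm = sum(rev * gm for rev, gm in segments.values()) / total_rev
chip_share = segments["chips"][0] / total_rev

print(f"chip share of revenue: {chip_share:.0%}")  # just over 50%
print(f"blended gross margin: {blended_gm:.0%}")   # roughly 80%
```

As the ~62%-margin chip segment outgrows the near-100%-margin licensing segments, the blended gross margin drifts down even though every segment performs to plan, which is the revenue-mix effect Des is flagging.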

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

That's perfect. I guess I have to ask this because, you know, the euphoria around memory is really high right now. Like, memory all of a sudden became something people are very aware of, the role it plays in AI, and we're actually seeing some pretty good server demand indicators in the market. I know AMD had a presentation last week with some fairly robust x86 server CPU growth expectations. How do you judge inventory? How do you think about, like, customers' inventory levels? How do you manage that? Because I can imagine the risk that people might see is, "Hey, in this environment, pricing's going up, and you guys don't participate in pricing," but are there pull-aheads in demand, you know, pockets that get created because of that stuff?

How, just walk me through your thoughts there.

Desmond Lynch
Senior Vice President and CFO, Rambus

It's something that we continue to monitor very carefully with our customers. If you go back to the DDR4 cycle, there was overbuying as a result of COVID at that time. What I would say is that our customers have been really cautious with regard to their inventory levels. I would also say that the sub-generations of DDR5 are coming into the market on a fast development pace just now; it's been every 12 months through the first three generations of DDR5. Our customers have not really wanted to stockpile that inventory, so I would say it's been at reasonable levels. It is something we continue to watch.

As a company, we have a strong balance sheet and strong cash generation. We're very comfortable holding some of that inventory on our own balance sheet to support our customers' ramps going forward. I would say that we've not seen any evidence of stockpiling of inventory at customers.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

And just to be clear, your chips are sold to the main DRAM suppliers that make the module, right? So you're not...

Desmond Lynch
Senior Vice President and CFO, Rambus

That's correct.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yeah. Okay. Luc, I'm gonna bounce back to you. It's not widely talked about, but I think something we should consider looking forward is the client side, right? You clearly have the majority of your business driven by enterprise, but you do have the CKD, the client clock driver, for the higher-end client markets. Can you talk about that opportunity? I think you mentioned earlier maybe there's a $200 million or so TAM opportunity. When does that show up?

Luc Seraphin
President and CEO, Rambus

Yeah, sure. You know, what we recognized quite early is that some of the challenges that you have in the data center in terms of signal integrity and power integrity, because of the performance of these systems, we were going to find in the client space. As speeds in the client space keep going up, some of the solutions that we had to develop for the data center will apply to the client space for signal integrity and power integrity. You start to see that need when speeds go over 6.4 gigatransfers per second. And 6.4 gigatransfers per second relates to, for example, the Arrow Lake platform from Intel, at the highest speeds of those platforms.

It's still a very narrow market segment, but it's the first step into the client market. As the client market develops over time, we're gonna see speeds increasing and the need for these solutions continue to appear. 6.4 is kind of the number to remember; Arrow Lake is where we intercept with the CKD for the first time. The next platform, Panther Lake, is gonna be at 7.2, so you have two speed tiers, 7.2 and 6.4. As the market grows over time, we're gonna see these things develop. It's a stepping stone into that market. We believe that in the long run, every one of these client systems is gonna need some form of these solutions.

This year, we have introduced power management solutions for the client space as well. It's gonna grow slowly toward the $100 million TAM, as we talked about earlier. That's an area where we will see quite some nice momentum over the long run.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

In the minute or so we have left: you mentioned earlier that the P&L generates a great amount of free cash flow for the company, and you can see that in the fundamentals. How do you think about capital return? Are there tuck-in acquisitions that you think about? Are there areas where you can further expand Rambus's opportunity set into adjacencies?

Desmond Lynch
Senior Vice President and CFO, Rambus

Yeah, in terms of capital allocation, we have a robust balance sheet with continued strong cash generation, which has enabled us to operate a consistent approach to capital allocation across a number of years. It's really built around three pillars: organic investment, inorganic investment, and capital return to shareholders. Organically, as Luc laid out, we've been spending in the right areas, which has increased our market opportunity, and we're seeing that flow through to the revenue performance. We're very pleased organically with what we've done with the company. Inorganically, we continue to have a strong funnel of M&A targets. Over the last five years, we've made smaller tuck-in acquisitions, which have really been targeted towards our silicon IP business and have enabled that business to get to scale, as Luc touched upon earlier.

We'll continue to be disciplined in our approach here, strategically, financially, and operationally; any M&A has to make sense for Rambus. Lastly, on the capital return side, we have a strong track record of buying back shares. I think we've returned about 44% of free cash flow since 2021. We have that demonstrated track record, Aaron.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Perfect. I'm gonna end on this. Is there anything we should have, I should have asked you that I didn't ask you? Or did we touch on most everything?

Luc Seraphin
President and CEO, Rambus

No, the only thing I would add is that, you know, with these three legs to our business, through patent licensing we see the long-term evolution of the memory market.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Yep.

Luc Seraphin
President and CEO, Rambus

As we develop things for the next 10 years. Silicon IP gives us insight into the semiconductor market because of the breadth and type of customers we have. And then we have the product business. So we really have a very nice view of where the memory subsystem market is going.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Perfect. We'll end it there. Thank you so much for joining us.

Luc Seraphin
President and CEO, Rambus

Thank you.

Desmond Lynch
Senior Vice President and CFO, Rambus

Thank you.

Aaron Rakers
Semiconductors and IT Hardware Analyst, Wells Fargo

Thank you.
