Good morning. My name is Duksan Jang. I'm from the U.S. Semis and Semis Equipment team here at Bank of America. I'm very delighted to host, Luc Seraphin, Chief Executive Officer, Desmond Lynch, Chief Financial Officer from the Rambus team. Welcome.
Thank you.
Thank you. Before we begin, did you have any comments to make about the safe harbor?
Yes. As usual, we encourage people to review our safe harbor statement regarding forward-looking statements, and also to read our official filings, the 10-Qs and 10-Ks, where you can find a lot of interesting information about the company as well. With that said, we can move on to the questions.
Sounds good. So just for those unfamiliar, would you mind starting off with a brief company overview, your history, how you got here, your main growth drivers?
Sure. So Rambus was created about 30 years ago, and it was created on the basis of developing the foundational technology used for memory interfacing. It's as simple as that. So every processor or every memory actually uses that fundamental technology that we have developed. We started the business as a patent licensing business 30 years ago, but over time, we have evolved our business model. We continue to have a solid patent licensing activity as part of our business, but over time, we have developed memory interfaces and high-speed interfaces based on that foundational technology. That's the silicon IP business, where we sell silicon IP to semiconductor companies. Those semiconductor companies integrate those blocks into their SoCs and then sell those SoCs to their customers.
That silicon IP business today represents more than $100 million of revenue, while patent licensing is about $200 million. Then we evolved into developing chips that we sell mostly to the memory vendors: Samsung, SK hynix and Micron. They integrate those chips onto their memory modules, and they sell those modules to the hyperscalers and enterprise customers. That chip business is our fastest-growing business. We started very small: in 2018 it was $38 million, and last year it was about $225 million. So very high growth in that business. That's who we are. We focus mostly on the data center.
We focus on memory interfaces and security, which we can talk about a bit later. We go to market in three ways: patent licensing; silicon IP, where we sell predefined IP to semiconductor companies; and chips that we sell to the memory module vendors.
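As an illustrative aside (not from the speakers), the chip-business growth quoted above, from $38 million in 2018 to about $225 million last year, implies a compound annual growth rate of roughly 43%. A minimal sketch, assuming a five-year span between those two figures:

```python
# Illustrative CAGR check for the chip-business figures quoted above.
# Assumes $38M in 2018 and ~$225M five years later (the span is an assumption).
start_revenue_musd = 38.0   # 2018 chip revenue, $M
end_revenue_musd = 225.0    # most recent full-year chip revenue, $M
years = 5

cagr = (end_revenue_musd / start_revenue_musd) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # prints approximately 42.7%
```

The exact rate shifts if the endpoints fall in different fiscal years, but the order of magnitude, roughly 40%-plus annual growth, matches the "very high growth" characterization.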
Sounds good. We can focus on the chips business first. This is the newest of your efforts, with a lot of focus on the interface chips as well. Can you give us a sense of how large the market is today, your positioning, and potentially your advantages over competitors?
Sure. So we sell those chips to the memory vendors, and these memory vendors integrate them onto their DRAM modules. The market for those chips last year was about $750 million to $770 million in size. That market declined last year over the prior year. One of the reasons was DDR4 inventory digestion, that is, inventory digestion of the prior generation, but also some CapEx redirection toward AI servers. So the market went down last year to about $750 million. We expect the market to rebound this year with mid- to high-single-digit growth.
As for our position in the market: in the DDR4 generation, we were above 20% market share. As the market moves to DDR5, our DDR5 market share is around 40% today. So the market's transition from DDR4 memory to DDR5 memory has been a catalyst for us to grow our share in that market.
Mm-hmm. Alongside that discussion, how far along are we in that DDR4-to-DDR5 transition today?
So we're far along now. The DDR5 launch in the market took time, and there are many reasons for that. This is a very small ecosystem in terms of the number of key players, but very large in terms of size. When the whole industry has to move from one generation to the next, all memory vendors have to be ready with DDR5, all processor vendors have to be ready with their DDR5-enabled processors, and all the buffer chip vendors like us have to be ready at the same time. Everyone has to align for the market to transition. That transition took time, but it has happened in earnest now. We are well into it: we are shipping three generations of DDR5, at different stages of qualification or production, to our customers.
And we're still working through the end of the DDR4 inventory digestion. So that transition is well engaged at this point in time.
Mm-hmm. Could you talk more about the unit and content opportunities for three generations of DDR5 products?
In terms of content, when the industry moved from DDR4 modules to DDR5 modules, it also changed the architecture of the module. One of the advantages DDR5 brings is more capacity and more bandwidth, but at the same time the complexity of the module increases. Because of that, the industry decided to move some functions that sat on the motherboard in the DDR4 generation onto the module in the DDR5 generation. That means the DDR5 module contains more chips than the DDR4 generation did, in addition to the DRAM. We call those chips companion chips, and there are three types: an SPD hub, which is a communication chip; temperature sensors; and power management chips.
The fact that those chips move onto the module adds to the TAM we address. Earlier, we said the market last year was about $750 million for the RCD chip alone. The companion chips in total add about $600 million of additional TAM content on the module. So that's really a growth vector for us. Mm-hmm.
I think you've previously mentioned that those chips add about a couple of dollars of content. Do you still see a similar zip code of content addition?
Yeah. The way we look at it, as we said: the RCD chip, which is what we've done traditionally, was a $750 million market last year, growing mid- to high single digits this year. The companion chips add $600 million to that TAM. So that gives you an idea of the additional dollar content: $750 million growing mid- to high single digits on the RCD, plus $600 million of companion chip TAM on top of it.
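As an editorial aside, the TAM arithmetic quoted above can be laid out explicitly. The growth-rate band below is an assumption standing in for "mid- to high single digits":

```python
# Sketch of the DDR5 module TAM arithmetic described above: a ~$750M RCD
# market plus ~$600M of companion chip (SPD hub, temp sensor, PMIC) content.
rcd_tam_musd = 750.0                   # last year's RCD chip market, $M
companion_tam_musd = 600.0             # additional companion chip TAM, $M
growth_low, growth_high = 0.05, 0.08   # assumed band for "mid- to high single digits"

total_tam = rcd_tam_musd + companion_tam_musd
rcd_this_year_low = rcd_tam_musd * (1 + growth_low)
rcd_this_year_high = rcd_tam_musd * (1 + growth_high)

print(f"Combined module TAM: ${total_tam:.0f}M")  # $1350M
print(f"RCD market this year: ${rcd_this_year_low:.0f}M-${rcd_this_year_high:.0f}M")
```

So the companion chips expand the addressable module content from roughly $750 million to roughly $1.35 billion, before any growth in the RCD market itself.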
Sounds good. You also touched upon the PMIC business. This is relatively new. You're expecting some revenue in the second half, I think ramping into next year. How should we think about your positioning in this market? How are you differentiated against the existing offerings?
Yes, that's a good question. The PMIC is one of those companion chips I mentioned. We came late to the PMIC market, but we believe it is going to be a very important growth driver for us, and we did this deliberately. When the market transitioned from DDR4 to DDR5, our number one priority was to make sure we had a leading RCD chip in the DDR5 world. That's why we went from about 20-plus percent share in DDR4 to 40% share in DDR5. It was the number one priority; it was absolutely necessary for us not to miss that transition in the market.
The next most important chip on the module in the companion chip family is the power management chip. The reason is that a memory module is very, very dense, sits in a very challenging thermal environment, and is very prone to noise. So it's very important to distribute power onto the module in a very efficient way, and that is hard to do. If the RCD chip is the hardest chip to make, the next hardest is the power management chip. We started to develop this capability internally by investing in a team about two years ago. We worked for two years, and we announced our chipset a couple of months ago.
We announced our chipset at a time when we knew we had traction with our memory customers. The customers that buy the RCD chip from us have given us very strong, positive feedback on our power management chips. Our differentiator, although we came late, is that we understand the ecosystem. The qualification of those chips follows exactly the same process as the RCD chip, and we understand the technical environment and the constraints on the module very, very well. That's another advantage: because we have both chips, we can also make sure the interoperability between them works well.
The way we introduced our chip, if you read our press release, is that we went for the high end, the most difficult chip to make, to convince our customers that we are a reliable partner there. That's also a good sign for our entry into that market. We enter at the high end for modules, and then that will cascade down into the other modules. In the longer run, these types of technologies can also be used in client systems. We're quite confident in the traction we have there. Mm-hmm.
Got it. Now that you have all three companion chips and you're also ramping the DDR5 RCD, how should we think about the mix and contribution to your product business going forward?
So, as we said earlier, our position today in the DDR5 RCD chip is around 40% market share, and we expect to continue to grow there. Our goal for the companion chips is to get 20% share of that market. There are different players, and we came late, so we're not going to step to 20% in one go; we'll go through the qualification process. But in the long run, our goal is to get 20% share of the companion chip market. The companion chips will start to contribute this year in the second half, and that explains some of the growth that we see in the second half of this year. Mm-hmm. But this is going to come through ramping with our customers.
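As an editorial aside, the share goals stated above imply a rough revenue opportunity. The inputs are the approximate figures quoted in the conversation; treating them as point estimates is a simplifying assumption:

```python
# Rough implied revenue from the share figures above: ~40% of the RCD market
# today and a stated long-run goal of 20% of the companion chip TAM.
rcd_market_musd = 750.0       # approximate RCD chip market size, $M
companion_tam_musd = 600.0    # approximate companion chip TAM, $M
rcd_share = 0.40              # current DDR5 RCD share
companion_share_goal = 0.20   # stated long-run companion chip share goal

implied_rcd = rcd_market_musd * rcd_share
implied_companion = companion_tam_musd * companion_share_goal
print(f"Implied RCD revenue: ${implied_rcd:.0f}M")                  # $300M
print(f"Implied companion revenue at goal: ${implied_companion:.0f}M")  # $120M
```

In other words, hitting the 20% companion goal would add on the order of $120 million on top of roughly $300 million of implied RCD revenue, before any market growth.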
Got it. This might be a high-level question, but we're seeing a lot of memory architectures move from traditional towards accelerated. We're seeing more HBM memory and perhaps less DDR. How does someone like Rambus position itself in that dynamic, and what are your strategies?
Well, we see this as an exciting time; people are interested in memory. Memory was not exciting for many, many years, and now it is probably one of the most exciting areas. Earlier I talked about our patent licensing program: everyone building HBM, GDDR, LPDDR, DDR, all types of memories, is a licensee of ours. So we have deep knowledge of these types of memories. Now, when it comes to products, let me start with silicon IP. We have developed IP controllers in those fields: GDDR controllers, HBM controllers, PCIe controllers, CXL controllers. We also have security solutions, and we sell those controllers to people who build chips for AI.
So we have a pretty good view of the traction we're getting there, and that explains the growth we've seen in the silicon IP business. We've sold a lot of these interfaces to people building chips for the AI space, so that's a growth factor for us. Now, on the product side, people tend to set HBM against DDR. We don't see them in opposition. What happened last year was a CapEx situation: within a fixed CapEx, people had to invest in AI servers. That being said, we continue to see growth in standard servers, so that's a vector for us.
We're going to see a refresh of standard servers starting in the second half of this year, like the rest of the industry, so that's going to be a growth factor. But look at the architecture of AI servers: they use both GPUs and HBM memory for the number crunching, but in that same AI server, you also have standard x86 servers with standard DIMM memory. The reason is that the HBM and GPU banks in an AI server have to be fed with data that has been pre-processed, and that pre-processing, that cleaning of data, is done by standard servers. Those standard servers use standard DIMMs, typically high-density DIMMs.
So this has been a tailwind for us, and that's why we don't set the two in opposition. Every time you see HBM and GPUs in a box, you also see standard servers, which people don't talk about much, with very high-density memory. If you think about it, in the latest AI servers from NVIDIA, you can have up to 4 TB of DIMM memory attached to those x86 processors. So this has been a tailwind from a technology standpoint. Mm-hmm.
Then would you say the AI servers, just because they have GPUs on top of the traditional servers, offer incremental content for you?
Yes, they offer incremental content. I would also say they have been a catalyst for the adoption of DDR5, because the servers in an AI box need so much power, so much bandwidth, and so much capacity that the DDR4 platforms could not provide it. So that was a kind of catalyst for DDR5 adoption. And yes, that answers your question. Mm-hmm.
Sounds good. You briefly touched upon this, but could you talk about the rise in client opportunities with the memory interface solutions?
Sure. One of the challenges in data centers, and the reason we have a buffer chip in particular, is reconstructing signals. In a data center, you have a lot of signals traveling on parallel lines in a very noisy environment, and the role of the buffer chip, our traditional business, is to reconstruct those signals so that you have clean signals in and out of the processor to the memory. In the client space, you also have memory modules, but the speeds were not as high as in the data center.
But what you see now, if you look at the roadmaps of the processor vendors in the client space, is that the speeds are going higher and higher, and there's a point where the same type of functionality is going to be required. Our view is that when the speed on the bus goes up to 6,400 megatransfers per second, a function similar to the data center RCD will be required in the client space, and these are solutions we're working on.
Similarly, as the performance of these client solutions continues to go up, we will have to think about power management solutions that address those higher requirements. So we see the client space as an expansion of TAM for us, with a slight delay relative to the data center, because the higher speeds come to the data center first.
Mm-hmm. Got it. On to the more fun business, in my opinion: the silicon IP.
Yeah.
Could you briefly give us some strategy overview of your silicon IP business, what your drivers are, how you're expecting to grow over the next couple of quarters and in the long term?
Sure. In the silicon IP business, we offer basically three types of IP: high-speed memory controllers, high-speed interconnect controllers, and security. The reason we focus on those is that memory controllers and I/O controllers are what the company has done for a long time, since we were born. That's something we understand really, really well, and it adds value to the ecosystem we play in. We were the first to have an HBM3 controller and the first to have an HBM3E controller, and we're actively working on HBM4. We were the first to have a GDDR6 controller.
We are actively working on GDDR7, and the same goes for PCIe and CXL. Our advantage is that this is technology we understand really, really well, in an ecosystem we understand really, really well, and we differentiate by being at the leading edge of those speeds. We always announce those speeds ahead of our competitors, and we engage with customers ahead of our competitors. On the security side, we started investing in security in 2011 through an acquisition, and we believe we are the leader as an independent security IP provider. Back in 2011, there was little understanding in the market in general of the importance of security.
But today, as you were saying about data center architectures, everything is changing. You now have very heterogeneous architectures, with a lot of different chips going into those data centers. Those chips have to hold a lot of data and transmit a lot of data between one another, and you want to make sure you protect that data. When it sits in one place, it cannot be compromised, and when it moves from one place to another, it cannot be compromised either. So security has become a critical component in all of those chips, and this has been a growth driver for us.
So the combination of high-speed memory, critical to AI; high-speed I/O interfaces, critical to AI, to automotive, to all the markets; and security has been the set of growth drivers for us. Our focus on those technologies has allowed us to grow in that market.
Mm-hmm.
This is, as we said earlier, a business above $100 million for us, and we expect it to grow 10% to 15% a year.
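As an editorial aside, compounding that stated 10%-15% growth rate from a $100 million base gives a feel for where the silicon IP business could land in a few years. The $100 million starting point and three-year horizon are illustrative assumptions:

```python
# Simple projection of the silicon IP figures quoted above: a base "above
# $100M" compounding at the stated 10%-15% annual growth rate.
base_musd = 100.0                       # assumed starting point, $M
growth_low, growth_high = 0.10, 0.15    # stated annual growth range
years = 3                               # illustrative horizon

proj_low = base_musd * (1 + growth_low) ** years
proj_high = base_musd * (1 + growth_high) ** years
print(f"In {years} years: ${proj_low:.0f}M-${proj_high:.0f}M")  # $133M-$152M
```

That is, three years of the quoted growth range would take the business to roughly $130-$150 million.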
Mm-hmm. More on interface IP: you briefly mentioned the HBM3E controller. How should we think about Rambus' opportunities and content changing as we go from HBM3 to HBM3E, and HBM3E to HBM4? Because the industry is moving really fast.
The industry is moving really fast, you're right. That's the nice thing about the memory industry. Compute capabilities are moving very, very fast, and it has always been the case that memory lags a bit behind from a technology standpoint. We call it the memory wall: we always have to think about new architectures to evolve the memory so it catches up with the speed of development on the compute side. The same is true for HBM-based architectures. The GPUs become more and more powerful. Today they use HBM3 memory, but there's a limit to what HBM3 can offer, so people are thinking about HBM4. HBM4 widens the bus and allows these GPUs to access more data faster.
As we traditionally do, especially with bleeding-edge technologies, we are developing HBM4 technology internally and engaging with a few leading customers there. These are very advanced technologies, and we're working with the leaders in that market. It's a natural extension of what we've done: we were first on HBM3, and we're working on HBM4 now.
Mm-hmm. Just one follow-up there. NVIDIA announced the Rubin platform, which is supposed to launch in 2026, I suppose in the latter half of it, and it will use HBM4. So for Rambus, when should we start expecting HBM4 revenue to materially kick in?
Without going into customer details, I'll say this: first of all, everyone developing their own HBM4 needs a license from us. That's the first thing I would say. Second, we are engaged with leading customers in the market on HBM4 solutions, but because they are leading customers, I cannot be more specific than that. As part of our portfolio, we should see revenue from the HBM family continue to grow as we move into next year.
Mm-hmm.
Yeah.
Got it. One on CXL, which I think you briefly mentioned as well. A lot of the CXL excitement slowed down a little last year and this year. Could you update us on your CXL opportunity and where the ecosystem is today?
Sure. There are two ways we approach the CXL market. The first is through our silicon IP offering. CXL is a standard for communication between chips; that's basically what it is. When you have heterogeneous architectures and a lot of chips, you have to make sure they speak the same language, so they have to agree on an interface, and the industry has converged on the CXL interface and protocol. We have a fair amount of CXL IP sales to anyone building a chip with a CXL interface, and that's been a growth factor for our silicon IP business. The flip side is that everyone is developing their own chip.
From a chip standpoint, the market is very fragmented. There are a lot of CXL interfaces, but the market for CXL chips is very fragmented in this first generation of CXL solutions. The other way we go after CXL is that we have developed our own chip, addressing the question of memory expansion. That chip is being used today by our customers for proofs of concept and to try out use cases, because the question the industry is asking is: is there a way to add memory through a serial bus and a CXL interface? But the industry has not converged on a standard chip solution, and it has not resolved all the questions of latency and the like.
So we have chips out there that people are using to do some testing work. But we believe the industry will not converge on a standard chip until at least CXL 3.0 speeds are available in the market, and that's not going to happen before 2026 or 2027 on the chip side. As you said, there's excitement about CXL, but the market today is fragmented from a chip standpoint; not from a silicon IP standpoint, but from a chip standpoint, it is fragmented. We hope the industry will converge on standard chips down the road.
Mm-hmm. Got it. I wanted to leave some very important questions for Des.
Yeah.
Gross margin outlook: 61% for products in the first quarter, and I think you've guided to the long-term range of 60%-65% for the full year. I mean, we're kind of there, aren't we? And we're going to see more DDR5 mix come in throughout the year. So how should we think about the gross margin exiting the year?
Yeah, we've been very pleased with the performance of the chip gross margin. As you mentioned, our long-term target is 60%-65%, and we really manage our gross margins for the longer term. In 2023 it was 63%; in 2022 it was 61%. So we're within that long-term target range, and we've done that by being disciplined in our approach to pricing as well as driving cost reductions. In any given quarter, we could see some movement in gross margin depending on which products are shipping, but over the long term we feel very comfortable with 60%-65%, and that's where we've been operating so far this year. Yeah.
Mm-hmm. Do you think the addition of the PMIC would have an impact on margins as well?
I think on the chip side, we'll be able to manage the addition of the companion chips within the long-term targeted model of 60%-65%, and that's the right way to think about the chip gross margin going forward.
Mm-hmm. Got it. And then more about your capital allocation strategy. You're generating cash pretty strongly, so I have to wonder what the strategy is.
Yeah, we're very fortunate to have a robust, debt-free balance sheet, and we continue to generate a lot of cash. Our capital allocation strategy is built around three pillars: organic investment, inorganic investment, and return of capital to shareholders. Organically, we'll continue to invest in the high-growth opportunities ahead of us. Luc talked about some of those today: the companion chips, the power investments, the client opportunities. It's important that we continue to invest at the right level to drive the growth of the company. Inorganically, we have been acquisitive: we've made five acquisitions in the last four years, which really got our silicon IP business to scale, and that's something we'll continue to look at.
But we'll continue to be disciplined in our approach to make sure any deal makes sense for us strategically, operationally, and financially. Lastly, on capital return, we have a great track record. Our target is to return 40%-50% of free cash flow to shareholders, but over the last three years we've returned $350 million, which equates to about 60% of free cash flow. Going forward, that is the playbook you should expect from us on capital allocation.
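As an editorial aside, the capital-return figures above imply a rough free-cash-flow number. Treating "$350 million at about 60% of free cash flow over three years" as exact is a simplifying assumption:

```python
# Back-of-the-envelope check on the capital-return figures quoted above:
# $350M returned over ~3 years at ~60% of free cash flow implies total FCF.
returned_musd = 350.0            # capital returned over the period, $M
payout_ratio = 0.60              # stated ~60% of free cash flow
target_low, target_high = 0.40, 0.50  # stated 40%-50% target range

implied_fcf = returned_musd / payout_ratio
target_return_low = implied_fcf * target_low
target_return_high = implied_fcf * target_high
print(f"Implied 3-year FCF: ${implied_fcf:.0f}M")  # ~$583M
print(f"Return at target ratio: ${target_return_low:.0f}M-${target_return_high:.0f}M")
```

In other words, the same cash generation at the stated 40%-50% target would have implied roughly $230-$290 million of returns, so the $350 million actually returned ran ahead of target.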
Sounds good. We could go on forever, but we've run out of time. Thank you so much, Luc and Des.
Thank you.
Thank you.
Thank you.