Rambus Inc. (RMBS)

51st Nasdaq London Investor Conference

Dec 11, 2024

Moderator

All right. Good afternoon, everyone. Thank you for attending and joining us, obviously, throughout the last few days, but specifically this session. I'm excited for our conversation with two of our leaders from Rambus. We have Luc Seraphin, CEO, and Des Lynch, our CFO. So maybe to kick things off: Rambus has over 30 years of experience in high-performance memory subsystems and is a fabless supplier of industry-leading ICs and Silicon IP solutions that advance data center connectivity, helping solve the bottleneck between memory and processing. So let's get started. For those that know the Rambus story, or those that are newer to it, could you give us an update on the current business and the strategic direction going forward?

Luc Seraphin
CEO, Rambus

Sure. So Rambus stands for RAM-BUS: the bus that connects to RAM. That's how the company started about 30 years ago. The company developed the foundational technology for interfacing processors with memory, so it's actually used in every computing system today. The business model the company started with was patent licensing: we developed these foundational patents, and we had patent licensing agreements with anyone who wanted to use that technology, whether they were on the memory side or the computing side. Over time, we evolved from a pure patent licensing business into developing technologies that people can use in actual products. We started by venturing into what we call Silicon IP, where we develop pieces of IP that we test and sell to semiconductor companies, and these semiconductor companies integrate that IP into their chips.

And the second thing we did over time was actually to develop chips ourselves. And we became a fabless company. So today, we have these three businesses happening at the same time. Patent licensing continues to be the bedrock of the company. It generates about $200 million of revenue at very high margin. It's a strong cash generator for us, very stable. And we inject this cash into the two other ways we go to market, in the Silicon IP business and in the product business. The Silicon IP business is about $120 million, growing 10%-15% a year. And the product business has been the highest-growth business for us. We started the product business in 2018. At that time, our revenue was about $39 million. Last year, it was $225 million. So we've seen very nice exponential growth there.

As we look into the future, we will continue to see growth from the product business and from the Silicon IP business. There's a need in the industry to continue to resolve that question of adding memory capacity and memory bandwidth to computing systems. That creates a lot of opportunities for our business going forward. And we'll talk about it when we talk about our future products.

Moderator

Yes. Well, I definitely have questions for all three of those. But maybe we'll start then with memory. And obviously, you've seen solid growth in the chip business. Can you explain your position in the memory interface segment and the opportunities you see there?

Luc Seraphin
CEO, Rambus

Sure. So that's a business we started, as I said, in 2017, 2018. We had 0% share. The reason we entered the business was really that one of our main patent licensees came to us and said, these technologies are going to be harder and harder to make and develop, and it's important that there are enough suppliers in the world to provide that technology, because it's critical to anyone building chips for data centers, and it's critical to the data center operators themselves. So they suggested we enter that business. We entered with 0% share, and over the years, in that generation of product, which we call the DDR4 generation, we grew our share with those licensees up to about 25%.

Then the whole market transitioned from the DDR4 generation of products to the DDR5 generation, and during that transition, we actually stepped up our share. Our share in DDR5 is about 40% now. The reason we gained that share is precisely what our licensee was telling us: these things are harder and harder to make, and they're moving faster and faster. So we invested in such a way that we were always the first to introduce a new generation of products compared to our competitors, and by doing so, we were able to grab market share at every generation.

Moderator

I see. So regarding DDR4, you've noted in your earnings that DDR4 inventory digestion was a headwind for the business through 2023. How do you see sales of DDR4 continuing from here?

Luc Seraphin
CEO, Rambus

Yeah. The transition from DDR4 to DDR5 was a bit hectic, to say the least. DDR4 inventories were high because after the supply crunch, all customers bought a lot of DDR4; they were afraid of not having enough parts, but they actually built inventory. At the same time, DDR5 was ramping into the market, and when DDR5 started in earnest, customers were left with a lot of DDR4 inventory, and it took time for the industry to digest it. What we've seen is that for six quarters in a row, the levels of DDR4 inventory were going down, but at a slow pace, if you wish. DDR5 has started to ramp in earnest, and it's the majority of shipments today, so as a consequence, we see DDR4 sales at minimal levels. Des mentioned on the last call that it's about $5 million a quarter.

Most of our product revenue today is DDR5. DDR4 is going to continue, but at low levels with a long tail. It's not going to grow. DDR5 is going to take care of the growth.

Moderator

What would you say your competitive advantage is in the market for DDR5?

Luc Seraphin
CEO, Rambus

Our competitive advantage is really that we are very focused as a company. We develop products typically faster than our competitors, and we introduce those products to the ecosystem faster. That's important because these products are sold to the three memory vendors in the world: SK Hynix, Samsung, and Micron. They put those products into modules, and they sell those modules to the CSPs, the Microsofts, Metas, and Amazons of the world. The qualification process in that ecosystem is very complex and very long, so if you're first to introduce the products, your customers and your customers' customers apply their engineering resources to you first when they develop their systems. They go to the next competitor as a second thought, if you wish. When you're first, you establish the path, and your competitors have to adapt to what you've done.

So being first to introduce every generation and sub-generation of products explains why we've gained share in that market.

Moderator

It wouldn't be a conference if we didn't talk about AI. But you actually play a very integral role in this. As you look at the opportunity in an AI server compared to a traditional general-purpose server, and specifically as the industry starts to transition from AI training to AI inference, how do you see that opportunity for Rambus?

Luc Seraphin
CEO, Rambus

So AI is certainly a very interesting opportunity for us because it, again, plays on the need to have more memory per processing unit, if you wish. And that's exactly where we play. An AI server has two basic components. It has GPUs with HBM memory to do the number-crunching AI function. But it also has standard servers inside the same box. The standard servers' function is to prepare the data for the GPUs and HBM: store the data, cache the data, format the data so that it can be used by the GPUs. So the way we look at it is, if a traditional server has an average of one terabyte of DRAM, when you move to the first generation of AI server, you have two terabytes of DRAM in the same DIMM format. And in the second generation, you have four terabytes.

So the memory you need for a standard server in an AI box is twice what you need in a standard one. And that's good for us because the more memory you need, the more modules you need in that server, and the more chips we can sell. So the first benefit is just the sheer volume generated by these AI opportunities. The second positive aspect for us has been that in AI servers, the need for capacity and bandwidth is so high that it could not be met with the DDR4 generation of products. So AI was a kind of catalyst for the adoption of DDR5; it was absolutely necessary for these servers to use DDR5. That was actually the tipping point for the transition from DDR4 to DDR5.

Because we have a better position in DDR5 than we had in DDR4, it actually allowed us to gain share and to solidify that share in the market.

Moderator

You recently announced a new family of MRDIMM solutions, which offers a significant uplift in memory bandwidth and in capacity. Can you talk about the significance of this technology and maybe provide some additional color on the size of that market and revenue opportunities?

Luc Seraphin
CEO, Rambus

Yes, absolutely. So MRDIMM stands for multi-rank DIMM: a DIMM with multiple ranks, and I'll explain what that means. The main challenge the industry faces in a server is that with the software workloads we see today, especially in AI, you need more and more memory capacity, so more memory, and more and more memory bandwidth, which is more speed of transmission between the memory and the processor. There are different ways you can add memory capacity and memory bandwidth. The first way is that the people making processors, like Intel and AMD, continue to increase the number of memory channels per processor. But there's a limitation to that. Today, they offer 12 channels per processor, but the more memory channels you add per processor, the more pins you add to the chip, and there's a limit to this for mechanical reasons, thermal reasons, and so on.

So we've probably reached the limit on the number of channels a given processor can have. The other way to increase memory capacity is to put one or two DIMMs on each of these channels. The issue there is that with one DIMM, you benefit from the full bandwidth; if you add a second DIMM to increase capacity, it's typically at the expense of bandwidth. So you have a compromise to make there. But once you've exhausted all of this, you cannot go any further; you cannot add more memory to that processing unit. So the idea of MRDIMM is to say, well, why don't I design a module with twice the capacity and twice the bandwidth of a standard module? And the way to do this, without getting too technical, is by multiplexing access to two ranks of memory.

That's why it's called multi-rank. So you benefit from the full speed of the processor bus with one module that has twice the capacity and twice the bandwidth. The benefit is really that you can double capacity and double bandwidth without changing the architecture of the server: you can use the existing architecture and infrastructure. So that's one benefit. The second benefit is that, like any product we develop, it's defined by JEDEC, which is a standards body. And because it's defined by JEDEC, everyone uses those products, and no one hesitates to invest in them. We know what product we have to develop. We know Intel, AMD, and all the processor companies are going to use that interface. We know the memory module guys are going to use that interface.

We know our competitors are going to build the same product. So the whole ecosystem invests in that next generation of product. MRDIMM is a JEDEC product as well, which shows that the whole industry is behind it. That's the second benefit. The third benefit for us is that when you look at an MRDIMM as opposed to a standard DDR5 DIMM, the content is much higher. You still have the same RCD chip, PMIC chip, temperature sensors, and SPD Hub. All acronyms, I guess, but these are the chips you have on a DDR5 DIMM. In addition to that, to be able to do that multiplexing, you need to add 10 new chips, which we call data buffer chips, that allow you to double that capacity and bandwidth.

By adding all of these chips, the content on an MRDIMM is about four times the content we have on a DDR5 DIMM, which was itself higher than the content of a DDR4 DIMM. So we have all of these benefits: increasing capacity and bandwidth without changing the infrastructure, and also multiplying the content by a factor of four. You asked about market size. The market size for the standard RCD chip plus the companion chips on the current generation of DDR5 is about $1.3 billion; roughly $750 million goes to the RCD chip, and the rest goes to the companion chips, so $1.2-$1.3 billion in total. The MRDIMM opportunity adds about $600-$650 million of TAM to that $1.3 billion. It's a very nice opportunity for us.

Moderator

There also continues to be interest in CXL, and you've previously talked about your investments there in both IP and products. Can you provide an update on how you see these opportunities evolving?

Luc Seraphin
CEO, Rambus

Yeah. So we play in CXL. You remember we have three parts to our business: patent licensing, Silicon IP, and products. In CXL, we mostly play today on the Silicon IP side. We have a CXL controller that we sell to anyone who develops a chip that needs CXL capability, and these customers range from small startups all the way up to large blue-chip companies. What we see through these engagements with the companies that buy that IP from us is that there's not one single CXL chip in the market. It's a very fragmented market: customer A develops their own bespoke product, probably for one customer, and then the next customer develops a different product for a different customer. So in aggregate, this is a very interesting market, but it's very, very fragmented.

For us, it's been a nice driver for the growth of our Silicon IP business. On the product side, we do have a product in the hands of customers, but we have not commercialized it because of the economics of a single product for a single customer. We're waiting to see if the industry converges on a common product that can apply to different customers. The other thing that is happening relates to MRDIMM, which we just talked about. One of the main use cases for CXL was to add memory to a processor through a CXL bus. It was an interesting idea, but it comes with its own technical constraints, in particular latency. The MRDIMM is actually a better and more elegant solution to that because it's an industry-defined product that uses the exact same infrastructure and architecture.

So I think some of the use cases we saw earlier for a CXL product are going to be taken over by MRDIMM types of solutions. In summary: CXL is a very good growth driver for our Silicon IP business; on the product side, we think MRDIMM has a better future from that standpoint. But we are in a wait-and-see mode, and we're ready to launch the product when the market is ready.

Moderator

Just staying with the Silicon IP, in addition to CXL, are there other large opportunities that you see there?

Luc Seraphin
CEO, Rambus

Yeah. On the Silicon IP business, we have a very focused portfolio of solutions at the leading edge of technology for key markets such as AI. In summary, we have memory controllers: HBM controllers and GDDR controllers. HBM controllers are critical for AI; we know that GPUs interface to HBM memory. GDDR controllers are going to be critical for AI inference in particular. So on the memory side, HBM and GDDR are very focused on AI. On the connectivity side, it's CXL, as we talked about, and PCIe, which are also critical to communication between chips in an AI server or other types of servers. So that's memory and interconnect. The other part of our portfolio is security. Security is extremely important.

When you have these systems with a lot of chips, a lot of GPUs, DPUs, and processors, you want to make sure you protect your data. Your data is very, very important. So we have two types of security IP. What we call data at rest makes sure that the data sitting in a chip is protected, is genuine, and has not been tampered with. The other part is data in motion, which makes sure that when you transmit data over a link between two chips, it cannot be corrupted. So that portfolio is really focused, but it's focused on applications at the leading edge of what people develop today: AI systems. Some of the security technologies extend beyond data centers; you find them in automotive, government, and all sorts of other applications.

In general, if you look at our product business and Silicon IP business, about 70%-75% of that goes to data center types of applications. On the product side, 100% goes to the data center today.

Moderator

Yeah. Transitioning a bit to the patent licensing piece of the business: I know you gave a great overview of the genesis of Rambus and how this has been foundational for you to both grow and invest in the chip and Silicon IP businesses. Can you provide some additional color on the licensees, or on how investors might think about the model for this business?

Luc Seraphin
CEO, Rambus

Yeah. The licensing business is about $200-$210 million a year at 100% margin, so it's a good source of cash. One of the challenges the company had a few years back is that our licensees looked at us and said, well, we're paying all of that money, which is normal, but what are the products you're developing for us? So what we did as a company was eliminate a lot of activities that were not really critical to these licensees and invest in a roadmap that mattered to their roadmaps. Our relationship with our licensees has changed completely over time. Our three largest customers for our chip business today are our three largest licensees. So we've moved from a more confrontational type of relationship to a very collaborative one.

That's also one of the reasons we're not attempting to grow our patent licensing business abnormally fast: we benefit from the type of relationship we have with the ecosystem. We continue to invest in our patent portfolio. It's important to our licensees that they see we continue to invent for the future. We have a small group called Rambus Labs whose mission is to think about the technologies that are going to be relevant to interfacing processors and memories five or 10 years from now. It's really looking forward into the future. The life of a patent is about 20 years; the life of a patent license agreement is between five and 10 years. Many times, the patents that are relevant for a five-year contract continue to be relevant for the next five years.

And then we continue to feed our patent portfolio with new patents. You don't see this from the outside, but we renew contracts about four to five times a year, depending on our licensees. We don't communicate about that because every contract is confidential. But there's a lot of activity in renewals to sustain this $200-$210 million business.

Moderator

On the financial side, obviously, you have a strong financial model which generates strong margins, as you shared, and high cash flow. How should we think about the long-term financial model for the company?

Des Lynch
CFO, Rambus

Yeah. That's a great question, Jack. As Luc mentioned, there are three ways we go to market. Our patent licensing business has been stable at that $200-$210 million level, and that would be the expectation going forward. Our Silicon IP business is about $120 million this year, up about 10% compared to last year. The expectation here would be that we grow 10%-15%, which is faster than market growth. And then lastly, on the chip business, as Luc mentioned, we've continued to see share gains as well as content increases, and that's the high-growth area of the business going forward. As we move down the P&L, we are fortunate to have a rich gross margin profile. It's 80%-85%, really driven by the high-margin patent and IP businesses.

But also on the chip side, we continue to have strong gross margins, in the 60%-65% range. From an OPEX perspective, we continue to be disciplined in our approach. From an R&D perspective, we're spending around 25% of revenue on R&D, which is important: we continue to fund the high-growth chip opportunities ahead of us, and Luc touched on quite a few of them earlier today. The right model going forward would be 23%-25% of revenue. On the SG&A side, we spend about $18-$20 million per quarter. We've made the major investments into the infrastructure side, so from here it will just be more inflationary types of increases, and you'll see really nice leverage on SG&A as we continue to grow the top line.

So everything flows down to a 40%-45% operating margin model with really high cash generation. We're very pleased with the way the financial model is playing out.

Moderator

Yeah. Those are impressive numbers. Maybe last one, just sticking with the cash generation. Obviously, you have a great track record with that. What are your thoughts on your capital allocation strategy?

Des Lynch
CFO, Rambus

Yeah. That's a good question. We are fortunate to have a robust balance sheet with strong cash generation, and that's enabled us to enact a stable and consistent capital allocation policy built around three pillars: one, organic investments; two, inorganic investments; and three, capital return to shareholders. Organically, it's important that we continue to fund the high-growth opportunities ahead of us, which will lead to the TAM expansions Luc talked about earlier. Inorganically, we have made five acquisitions in the last five years. These have been smaller acquisitions, but they've been really important, and they've enabled us to get our Silicon IP business to scale. We continue to have a rich funnel of M&A opportunities, but we'll continue to be disciplined in our approach: strategically, financially, and operationally, it has to make sense.

Lastly, on capital return, we have returned 40%-50% of free cash flow back to shareholders if you look over a number of years. Just this year, we have bought back about $113 million of shares. That follows on from $100 million of buybacks in both 2022 and 2023. So we have a consistent approach to capital return, and looking forward, I would say it will be the same playbook that's been very successful for us.

Moderator

Great. With that, I do want to open it to the audience to see if there's any questions.

Audience Member

Can I ask a question?

Moderator

Yes. I don't know if we have a microphone up front at all. OK. She's hustling over. Thank you.

Audience Member

Thanks so much. So what you're saying now is very similar to what you said a year ago, which is great. But I'm wondering, what have you learned over the past year? Is it maybe that the next product generation has taken a little longer to ramp up? Or, to put it differently: what's the most misunderstood thing that is not reflected in your share price, given that it's not doing that well?

Luc Seraphin
CEO, Rambus

The thing that happened during the transition last year is that the traditional server market went down, and it was a function of CapEx allocation by our customers' customers. With fixed CapEx, they all rushed, for the right reasons, into building AI infrastructure, and by definition, an AI server is much more expensive than a traditional server. As a consequence, in terms of volume, the traditional server market went down last year. But when the traditional server market went down, our revenue stayed flat, so we actually gained share despite that. There was a little hangover from that situation in the first half of this year.

But what we said in the first half of this year is that we would see a recovery of the traditional server market in the second half, and this has happened. If you look at our product revenue this year, the second half, if we take the midpoint of Q4, is going to be 30% higher than the first half. So we see that recovery in the second half of the year. The other thing happening in the market that is worth noting is that the rollout of new generations and sub-generations of products has accelerated quite a lot. In the DDR4 generation, we had to design and launch a new product every other year. Today, in the DDR5 generation, we have to design and launch products every year. So the pace has doubled.

The number of chips on each module has also increased. Those are the two things that have happened: from a market standpoint, the dynamic with traditional servers; from a development standpoint, the acceleration of product rollouts and the expansion of the product offering on the modules.

Moderator

Great. Any last questions? We've got one more here.

Just a question on the 25% R&D: how much of that is directly customer-led development, as opposed to your own research?

Des Lynch
CFO, Rambus

It's not customer-led. All of our products, whether in Silicon IP or on the product side, are defined by the industry. So we don't charge NRE for the development of products, if that's your question. We know what the roadmap is for the industry, and we use the cash inflows from our different businesses to develop that roadmap. It's not customer-driven; it's more industry-driven, I would say. Yeah.

Moderator

All right. Any final questions? We are at time. Cool. All right. Well, Luc and Des, thank you so much for joining us today for this conversation.

Luc Seraphin
CEO, Rambus

Thank you.

Moderator

All right. Thank you for joining us.
