Rambus Inc. (RMBS)

53rd Annual Nasdaq Investor Conference

Dec 10, 2025

Moderator

So, thank you all for joining us, as you can see on the screen up here. Joining me on stage, we have the Chief Executive Officer of Rambus, Luc Seraphin. Thank you so much for...

Luc Seraphin
CEO, Rambus

Thank you.

Moderator

For being here with us today. So, Rambus has over 35 years' experience in high-performance memory subsystems and is a fabless supplier of the industry's leading ICs and Silicon IP solutions that advance data center connectivity, which helps solve the bottleneck between memory and processing. You know this better than I. So, for the audience, maybe we can just level set on the Rambus story and bring everybody up to date on how the business is doing thus far, and maybe start with the outlook for 2026.

Luc Seraphin
CEO, Rambus

Sure. You know, as you said, we started 35 years ago working exclusively on memory interface technology, the technology that is required to connect processors with memory. And obviously, these technologies today are critical to data centers and to AI because that bottleneck between memory and processors is what is making these technologies more and more advanced. We started as a patent licensing company 35 years ago. We still have a patent licensing business. That patent licensing business is about $210 million a year, 100% margin. It's very stable. It's a cash generator for the company. But that's not a business that we expect to grow in the long run. We've actually transformed the company into a company that delivers technology for these memory vendors. And we deliver the technology either in the form of Silicon IP.

We basically develop silicon blocks that we sell to semiconductor companies, and these semiconductor companies integrate those blocks into their SoCs, their ASICs, their products. That business, the Silicon IP business, was $120 million last year and is growing 10%-15% a year. Finally, we make ICs. We have a fabless semiconductor model. We started to make ICs seven to eight years ago with a 0% share. We started from scratch. That business this year, if we take the midpoint of our guidance for Q4, is going to be about $340 million. And that business grew 40% over last year. That's a very high-growth business for us. It's driven by the demand from data centers, whether they are AI-related or non-AI-related. It shows the strength and the importance of memory in those data centers.

Moderator

Yeah. So speaking of memory, you've had some nice growth in the memory chip business. Maybe we can start with, for that, just the overall size of the market and what are some of the competitive dynamics there and some of the strategic dynamics for you?

Luc Seraphin
CEO, Rambus

Sure, so the chips we develop are actually interface chips that sit on the memory modules and interface between processors and memory. This is the simplest way to describe it. The market for that interface chip we estimate at about $800 million a year, and that's the base. When the memory market moved from the DDR4 generation to the DDR5 generation, we found that you have more chips on the module than the simple interface chip, so we're adding SAM or TAM to what we address. So in addition to this $800 million TAM, we have what we call companion chips, which are all the chips that we have to develop for that module, and these companion chips represent about $600 million in addition to the $800 million.

And then these technologies are going to expand outside of the data center into high-end client systems, high-end PCs, because we're going to find the same type of challenges from a technical standpoint there. That adds another $200 million of TAM. And finally, there's a new technology that we may talk about later, which we call MRDIMM. It's a technical term, but that adds another $600 million. So you have these layers of TAM expansion based on these technologies that we're going to go after.

Moderator

Okay. And the transition to DDR5 brought about additional chip opportunities in the form of companion chips, as you were saying. Can you talk about your chip offerings today, the size of that market, and the overall timing and expectations for both the revenue as well as market share expectations?

Luc Seraphin
CEO, Rambus

Sure. So when the market moved from DDR4 to DDR5, the industry decided that some of the functions that were performed on the motherboard in a computer had to move to the memory module because of the complexity. So when you move from DDR4 to DDR5, you're adding companion chips on the module in addition to the interface chip. And as we said earlier, the interface chip TAM is $800 million. The companion chip TAM is $600 million. These chips are a power management chip, whose role is to manage power on that module, delivering very stable power at different voltage levels in a very controlled, sequential way; a controller chip that we call the SPD Hub; and two temperature sensors.

So there are three types of new chips, four chips in total, because we have two temperature sensors in addition to the controller chip. And as I said, that moves the TAM from $800 million to $1.4 billion.

Moderator

Okay. As the industry transitions from AI training to AI inference, what is the Rambus content opportunity in an AI server, maybe compared to a traditional general-purpose server? And what does that mean for your chip opportunity? Does that have an impact on the product cadence or content?

Luc Seraphin
CEO, Rambus

Sure. So AI has been generally very good for us. There were questions at the beginning about whether AI servers would cannibalize traditional servers. But I think some of us did not realize that if you take an AI server, you have a traditional server in that same box. You have the GPUs and HBM that a lot of people talk about. But next to it, you have traditional DDR with a traditional x86 processor. And the reason is that to feed the HBM and the GPUs, you need to cache the data and prepare the data. And that's all done in a traditional server. So AI has been a catalyst for us for the adoption of high-end traditional servers. So you have those that are growing. In addition to that, you have the traditional servers where you don't do AI at all, which continue to grow because there's this refresh cycle.

Inference is very, very interesting because AI started with machine learning. It's very compute-intensive, very expensive. The idea that you do inference on the same equipment is actually a very expensive idea. If you want to monetize AI, you have to move to inference. And inference runs much more on traditional servers because it's simpler, it's on a smaller set of data, and that drives demand for traditional servers. We see that evolution in a very positive way. AI training was a catalyst for the adoption of the technology, and AI inference is a growth driver going forward.

Moderator

Yeah, a lot of tailwinds there. You mentioned this before, so I do want to come back to it. Your MRDIMM, that's a mouthful, solutions. So those offer a significant uplift in bandwidth and capacity. Can you talk about the significance of this technology and perhaps provide some additional color on the size of that market and the revenue expectations?

Luc Seraphin
CEO, Rambus

Sure. So we call it MRDIMM. And the idea here is servers, whether they are traditional servers or AI servers, they're hungry for more capacity and more bandwidth. You want to access more data faster. And that's a problem that the industry has always had. And MRDIMM has been defined by a standard body called JEDEC, like all the products we make. And the idea is to double the capacity, memory capacity on a single module by putting memory on both sides of the module and doubling the speed of access to that memory by using special multiplexing techniques. And this has been defined by the industry. And the beauty is you can double the capacity and double the bandwidth, but you can use this on an existing infrastructure. You don't have to change the architecture of the server.

So you can remove a standard module and replace it with an MRDIMM, and you have double the capacity and double the bandwidth. So that's the significance of it from a technical standpoint. From an opportunity standpoint, for us, it's really nice because it actually multiplies our content by a factor of four between a standard module in DDR5 today to an MRDIMM. And the reason the content is multiplied by four is because you have more memory and higher speed. You have a more complex and therefore more expensive interface chip. You have a more complex power management chip. You still have two temperature sensors and an SPD Hub, but you also have to add 10 chips that did not exist on a standard DIMM that have to do with the multiplexing of the data.

So if you take all of these together, the content on an MRDIMM is four times the content of what we offer today on a standard DIMM. And in terms of timing, it's going to be linked to the rollout of platforms from mainly Intel and AMD. So next-generation platforms from Intel and AMD, Diamond Rapids and Venice, are going to be the first platforms that are going to be MRDIMM-capable, if you wish. And these platforms should ramp in the market by the end of 2026, the beginning of 2027.

Moderator

Okay. And there continues to be a lot of interest in CXL. And you previously have talked about investments in IP and the products. How do you see those opportunities developing?

Luc Seraphin
CEO, Rambus

So we had two ways of approaching CXL. As part of our Silicon IP business, where we build pieces of IP that we sell to people who build chips, we have a CXL offering. So people who build chips that have a CXL interface can buy that interface or that controller from us. And in that area, we have a large number of customers. We sell them licenses per use. If they use our CXL controller, they pay us a license to use that controller. We also had a CXL product development, and we have a CXL product, but we've not commercialized that product because we realized that CXL defines an interface. It doesn't define a chip. So through our Silicon IP business, we realized that customers develop CXL chips that are custom to their customers.

So it's almost a one-on-one relationship, one chip for one customer. So the market is very, very fragmented. It's an interesting market in terms of its total size, but it's a very fragmented market. And the main use case for CXL was memory expansion, the idea to add memory to a processor. And we thought MRDIMM was a much more elegant way of doing that for all sorts of reasons, technical reasons, but also due to the fact that MRDIMM, as we said earlier, is defined by the industry at the product level, and it uses the current infrastructure. So the implementation is much easier, much more elegant. So we continue to watch that area. And in summary, our CXL offering remains today as part of our Silicon IP business, and we don't have a CXL product per se.

Moderator

Okay. And sticking with the Silicon IP business, can you give a brief overview of that strategy, and a bit on what the key drivers are and what the long-term growth looks like?

Luc Seraphin
CEO, Rambus

Yeah. So in the Silicon IP business, we actually develop those pieces of IP that people use when they develop those chips. If you take large semiconductor companies, they don't have the resources to develop every part of the chip. So they buy this IP from people like us. We have a very laser-focused offering in Silicon IP. So we play in two areas. One is security. Security is becoming really, really important in data centers, in automotive, in IoT because of all of this data that you have to deal with. You have to make sure it's never corrupted, whether it sits on the chip or whether it's being transmitted between chips. You have to have those security technologies to make sure the data is not corrupted.

We are probably the largest silicon IP provider for security as an independent company because we invested in that business in 2011, a long time ago, way before people saw security as being so critical. So that's half of the silicon IP business: security. The other half, which is very laser-focused as well, has to do with high-speed interfaces, everything that is required in new generations of servers and AI servers. So we've all heard about HBM memory. We have HBM controllers, GDDR controllers. We have PCIe controllers, which are high-speed links that go into these chips. So that business in total is about $120 million, as we said earlier. It's roughly split half and half between security and high-speed interfaces. It's a business that grows 10%-15% per year.

The growth driver is really the rapid evolution of data centers and the need for more speed and more security for your data. We're very focused, but the demand for these technologies is very high because of the demand for data in the data center.

Moderator

Yeah. Building off that, as we kind of pull back a little on just the China exposure, we're hearing that EDA companies are being left out of that market a bit. Would you say that's a similar risk for your IP business?

Luc Seraphin
CEO, Rambus

The risk for us related to China remains very small. We actually have very little exposure to China in terms of customer base. It's less than 5% of our business. We monitor on a regular basis whether there are any bans that apply to us, which is not the case, so we don't have any of these situations affecting us today, and if it did, the materiality would be very low given the low revenue we have from China.

Moderator

Okay. I do want to go back to the patent licensing that you touched on in the overview, and that, again, has provided such a solid foundation for Rambus to grow and invest in both chips and the Silicon IP business. Can you provide some additional color on those licenses?

Luc Seraphin
CEO, Rambus

Sure, so as I said earlier, it's how the company started. We really invented these DDR types of technologies. That's why they're so widespread: anyone who builds memory, or anyone who builds a chip that has to interface to memory, has to be a licensee of our technology. Typically, the lifetime of a patent is 20 years, so once you have a patent that is relevant, its lifetime is 20 years, which is very nice. Typically, a contract on patent licensing is anything between 3 and 10 years, let's say 5 years on average, so when we renew our patent license agreements, the patents that were relevant in the previous agreement most of the time continue to be relevant in the next agreement.

And the other thing that we kept at Rambus is a group called Rambus Labs, which continues to work on generating new patents that we think are going to be relevant down the road. So as I said, from a business standpoint, it's a flat business, $200 million, very high margin. But it also gives us a lens into the future because we have to look at what we think is going to be relevant 10 years down the road. And that's a very nice thing about Rambus. If you look at the three businesses we have, patent licensing gives us a very long-term view about what's going to be relevant in terms of foundational technology 10 years down the road.

The silicon IP business, because we work with most of the semiconductor companies and they develop chips that are going to be in the market three to five years down the road, gives us very good insight into the trends in each one of those markets. And then we have our chip business, which is growing very fast, where the roadmap is defined by the industry, which gives us comfort about investing our dollars because we know that the whole industry is behind those roadmaps, and they all go together. The three main patent licensees that we had when we started the business, who are the three main memory vendors, have become our first three customers in terms of products, so it's been a nice transition for us. Our main licensees have also become our largest customers on the product side.

So we've completely changed the relationship with them.

Moderator

Yeah. Wow. And again, understanding those trends allows you to really skate to where the puck's going to be. Maybe off of that: obviously, your financial model is incredibly high quality in the sense that it generates strong margins and high cash flow. Can you give a little picture of how you think about the long-term financial model and capital allocation?

Luc Seraphin
CEO, Rambus

Sure. So if you look at the three businesses, the patent licensing business is a 100% gross margin business, which is quite rare. The Silicon IP business is about 95% gross margin. And the product business is between 61% and 63% gross margin. The mix is going to change going into the future. We're going to continue to grow our product business, which is relatively lower gross margin. But we see some benefits from leverage on OpEx, because once you invest, there's a level that you don't need to increase to keep working on your roadmap. So we have an operating margin of about 40%-45%, and we expect to keep that even though the mix is going to change. In terms of cash, in the last 12 months, we've generated $300 million of cash from operations. So we generate a lot of cash.

Our priorities are obviously to continue to invest in our organic growth. With the dynamic in the data center, we have to develop more products. We talked about more content, so this means more products. We have to develop them at a much higher pace. For example, in the DDR5 generation, we have to develop a new interface chip every year. In the DDR4 generation, it was every two years. So we had to double that pace. Organic growth continues to be one of our focuses. We continue to look at acquisitions. We've made a few small acquisitions over the last years, five or six acquisitions, mostly on the Silicon IP side. That's something that we continue to monitor as a vector of potential growth. Finally, we return cash to our investors.

We have the goal of returning 40%-50% of our free cash flow to our investors, and we've been on track in doing that over the past years.

Moderator

Yeah. Great. I do want to pause here to see if there's any questions from the audience. I've got one there back to my left.

Thanks for taking my question. One question on Wafer Scale Engine. Is that an opportunity for you guys, or do you see that as a threat coming from Cerebras?

Luc Seraphin
CEO, Rambus

On what engine?

Wafer Scale Engine, Cerebras?

No, it's probably not a threat because I actually.

You don't know those guys?

No, no. We get a lot of questions about the threats of new processors that are competing with Intel processors, or new processors based on the ARM architecture, or threats coming from different types of memory like HBM. And what we see is that we've actually never seen any cannibalization there. Because we are fundamentally working on the interface, we're kind of agnostic as to what processor is being used, what core is being used in that processor. And as we said earlier, AI has created this kind of trigger or catalyst for the use of standard memory as well. So we always look at threats. I cannot comment on this particular one. But a lot of the threats that we've seen have actually been catalysts for us because we're fundamentally at the interface. And what's built beyond that does not really affect what we do.

Moderator

No, makes sense. Thank you for your question.

Luc Seraphin
CEO, Rambus

Good question, yeah.

Moderator

We have one more back here.

Thank you. Thank you. So you've been pioneering the Silicon IP designs. Are you facing some competitive threats from fast followers? I'm thinking of eMemory in Taiwan, especially in the domain of security, hardware-based security versus software-based.

Luc Seraphin
CEO, Rambus

So we are mostly a hardware-based security company. Our root of trust is a hardware root of trust. We believed from the very beginning that hardware-based security is much more robust than software-based security. And at the beginning of that business, by the way, we were competing with very strong software-based security companies that fell by the wayside. Yes, we have fast followers. But I would say that, with all humility, we feel confident in our position in the security Silicon IP business because of the history we have there. I would say that we do see fast followers. What we also see is that we compete a lot with internally developed solutions. Some customers, over the last few years, have gotten into the habit of developing their own security. But more and more, they rely on external companies because, again, the pace here is really, really fast.

The industry talks a lot about quantum computing, for example. Quantum computing is going to give computers so much more power that some of the security protocols and security solutions that are out there today are not going to be good enough in the future. So, for example, in our security offering, we have developed a quantum-safe core that allows our customers to be ready for when quantum computers are out there trying to attack the security of their chips. So it's a fascinating area where we believe we keep having a lead.

Moderator

Yeah. Well said. Well,

any comment?

Sorry, go for it.

Any observation on eMemory specifically or not particularly?

Luc Seraphin
CEO, Rambus

On e? Sorry?

eMemory in Taiwan.

Yeah. They're one of the competitors. But as we indicated earlier, we believe we keep having a lead in that market. Yeah.

Moderator

Yeah. We can follow up on that. I do want to be thoughtful of time. But thank you all for joining us, Luc. Thank you so much.

Thank you.
