All right. Good afternoon, everybody. Thank you for being here. I am Marco Lagos. I'm Morgan Stanley's Head of U.S. Semiconductor Investment Banking. I am thrilled to be here with Luc Seraphin, CEO of Rambus, this afternoon. Let me read a quick disclosure here. For important disclosures, please see the Morgan Stanley Research Disclosure website at www.morganstanley.com/researchdisclosures. If you have any questions, please reach out to your Morgan Stanley sales representative. With that, Luc, thank you for being here. Great to have you. Why don't we start sort of high level? You know, I'm gonna take it easy on you given the crazy 48 hours of travel you've had, which is symbolic of all the great client and customer interaction you're having. Let's start at the top.
Framing the business, just in general, what is Rambus today? The question really is, you know, I think a lot of investors think about Rambus primarily as an IP company, that is IP and licensing company. How would you describe Rambus today, and what has changed the most meaningfully in the last, call it three to five years?
Yeah. For some people, Rambus is known as an IP company. That's understandable. We still have a very strong and very enduring, you know, IP portfolio, which is foundational to our business. But we're not only an IP company. Today, we are really a system-relevant semiconductor company that offers complex chips, chipsets, but also system solutions for data center memory subsystems, basically. We've shifted the company over the last two to three years from being essentially an IP company to becoming a semiconductor company that offers complete system solutions for data centers and AI.
Wonderful. Like just kind of touching on that, though, talk a little bit about the economic model, you know, product revenue, royalties, and IP. How do those things all sort of interact, and why is that mix strategically attractive?
We have an interesting business model, intentionally, by the way. You know, on one hand, we have our IP business that brings, you know, cash flows, that brings longevity because we have long-term agreements, and that brings us reach in terms of the number of customers and the number of platforms that we touch with that IP business. We combine this with a product business, a semiconductor product business that brings growth to the company, that brings customer intimacy, and that brings also relevance in the memory subsystems for AI. The combination of the two is quite unique in the semiconductor industry. It's a very balanced business model that allows us to invest in innovation.
You know, the high cash flows that we get from our IP business allow us to reinvest into products in a market that keeps accelerating in terms of the type of platforms that we have to roll out to our customers.
Yeah. It's not just the sort of the magnitude of the cash flow, it's the predictability of it that lets you sort of plan ahead as well.
Yeah. The magnitude, the predictability, those two aspects, you know, allow us to go through cycles, while continuing to invest in new products and new solutions.
You've now mentioned AI and data center a couple of times. I'm assuming that's the market that you guys are most focused on today or is it different than that?
This is certainly the focus of the company today. We are going through a memory super cycle in AI. It's not only a question of having more bandwidth, it's a question of, you know, having the right level of power. It's a question of reliability of the systems. It's a question of interoperability of the systems. It requires a lot of investment, deep understanding of those systems, and intimate relationship with customers.
Mm.
That's where we position the company today. We certainly address other markets with our IP business, but the focus is really data centers and AI infrastructure.
Wonderful. With that in mind, right, and this is, this is directly related, why don't you tell the audience a little bit about the Rambus DNA. What is the heritage of the company, and why is it right place, right company, right time?
Our DNA is really memory subsystems. It's not about simply providing chips for the memory subsystems in AI infrastructure. It's understanding how memory interacts with the system. We've been doing this for the last 30 years. We understand memory subsystems really, really well. As I said earlier, it's about how do we see those systems evolve in terms of bandwidth, in terms of power management, in terms of reliability, and in terms of interoperability between the different chips. It's not something that is easy to reproduce.
Mm.
We typically develop solutions ahead of the market, several years ahead of the market, several years before they actually ramp in the market. We work closely with our customers. We work closely with the standard organizations. We have a pretty good view of where our roadmap is going. That system understanding of how memories operate in a data center is really our DNA. What we've realized over the last few years is that memory subsystems have become a critical point of success in AI infrastructure. It's actually a bottleneck, and resolving that bottleneck is what we know how to do.
Excellent. Obviously, memory is going through a super cycle. I think we've seen some, you know, just amazing performance from all the traditional memory players in general. How are you positioned for the key memory architectures for AI and data center? Just to name a few, and you can talk about more if you'd like, or just keep it to these: DRAM, high bandwidth memory, SRAM, and the next generation CXL type stuff.
We actually play in all of these areas one way or another. You know, on the product side, obviously, we focus on DDR module solutions or chips for those modules. We see growth there, not only in terms of continuing to grow our market share, but we grow the content, you know, on those modules. Those modules are going to become more complex going forward. We do have several vectors of growth on the DDR side.
Mm-hmm
of the business. We also interact with other customers with our IP business, with our HBM IP, our PCIe IP. Anyone in the semiconductor industry who wants to develop a solution for AI infrastructure has, you know, the possibility of using our IP. That's really interesting for us, that IP business in HBM and PCIe or security, because we deal with semiconductor companies that build chips that are going to be in the market in two to three years from now. It gives us a very good insight of the trends that are happening in those markets.
That's excellent. As we think about customer interaction, right? Traditionally, you could look at, you know, and I'm able to talk about this more, but it's, you know, obviously SK hynix, Samsung, Micron, the storage players. That's traditionally where you guys have had the most interaction and success. How are you thinking about the pull side of things with the hyperscalers, working with them and interacting with them?
That's a very dynamic market. It's a market that is accelerating. Historically, you're correct. You know, our main interactions were with the three memory vendors and with Intel and AMD, because, you know, that interface chip that we develop sits between the memory modules and the processors.
Mm.
They have to be validated by both the memory vendors and by the processor vendors. What has happened over the last two years is that the hyperscalers have increased demands on the performance of those systems. They're actually pushing, you know, the market to move faster, so we have interactions with them. They're also proposing novel solutions that we have to look at. Our interaction with the ecosystem has changed. We're not only dealing with the memory vendors and the x86 processor vendors, we're also dealing with anyone who develops processors based on ARM cores, for example, and we interact with our customers, namely the hyperscalers. There's a very deep ecosystem interaction that we are in today.
Again, you get an early look at all that stuff just given the development, sort of.
Yeah, we look at it early. We also take a leading role in JEDEC, which sets the standards for this because, you know, all of these products, whether they are ARM-based or x86-based, whether, you know, they're using different types of modules, they all have to work together. It's important for the industry to define those standards, and we do play a critical role in this standard setting to make sure that, you know, we participate in the definition of the roadmap. That gives, I would say, durability in our, you know, revenue outlook because we know what we need to develop ahead of time. We actually participate in the definition of that, and that gives our team a very clear roadmap into what they need to do in the next few years.
Got it. Just the last question on sort of the memory part of it, right? You're talking about durability of the revenue over time, part of it being driven by the stickiness of your relationships over longer periods of time and understanding things well ahead of when they're coming to market. I would say in memory architecture in general, the things that matter are reliability and performance, probably over cost, right? If you agree or disagree with that, first of all. Second of all, if you agree with that, how does that dynamic benefit Rambus?
Reliability in a memory subsystem in a data center is not an option. I mean, not being reliable is not an option. You know, if you have an issue with that chip that sits between the memory and the processor, then, you know, you have a system down or the workload doesn't work. The cost to our customers and their customers is unacceptable. Reliability is very, very important. Therefore, our customers pay attention to, you know, the trust that we have developed over years. They pay attention to the validation process, the quality that we put into those products, more than they care about cost.
Mm.
That gives us a longer-term, sticky relationship with the ecosystem, and that gives us also a way of maintaining, you know, price at, you know, a reasonable level, because cost is important. I would say that's less important than, you know, the quality, the reliability of the system, and the ability to develop these products, generation after generation, at a faster pace. That's really what's important for those customers.
Okay. You know, shifting away a little bit and wading into AI data centers, you talked about system complexity. That's obviously a big part of the structural growth story for the company. One thing we've seen is AI is driving incredible innovation as far as the architectures for memory, right? What has surprised you the most about how quickly those requirements are evolving?
Actually, what has surprised me is how fast it has evolved over the last few years. I would say over the last two years, we had to double the pace of rollout of our products. We had to expand our product portfolio, and we also had to look at all the innovations that people are coming up with.
Mm-hmm
... you know, in terms of memory subsystems. That plays at the center of what we do. We actually like, you know, that dynamic that is happening now. It's putting a strain on our development teams. You know, they have to work faster, they have to work better. It also gives us a lot of opportunities in terms of the type of products that we can develop for the future. It gives us also some immunity against the cycles.
Mm.
These requirements are always gonna be here. The requirements for bandwidth, the requirements for low power, the requirement for reliability, the system understanding are always gonna be relevant going forward, and that plays at the center of what we do. The pace of change has been amazing over the last couple of years.
All right. I'm gonna pivot just a second away from sort of the main path here and just talk about, you know, your decision-making, how you manage the company. You know, obviously a lot of opportunities you could be seeding and thinking about. You're talking about the change and the pace of the change and how that benefits you. How do you think about where to allocate, you know, time, dollars, personnel? You know, how do you make that decision?
If you look at the history of Rambus, one of the first things we did a few years ago was to actually stop the activities that were not related to the memory subsystems in data centers or AI infrastructure. That was, in retrospect, a very good move because of the acceleration that is happening in AI infrastructure. The first decision is to decide what not to do.
Mm-hmm.
We did that. When we look at what we wanna do, we wanna invest in products that have the potential to grow in the market and where we have the potential to take a leadership position. As a first approximation, what we're doing is we are developing products for every module solution that either goes into an AI infrastructure or data center or in a client system.
Mm.
What you've seen over the past two years is product announcements in that direction. We announced a power management chip for the data center. We announced a power management chip for the client space. We also announced other companion chips for these two. All of these chips are defined by JEDEC. Again, that gives us the confidence that they are going to be adopted by the market at large. That's how we make decisions. In terms of prioritizing those decisions, time to market is really important. The validation process for these products is critical, and it's a long process. You wanna be first in market with the new products. What we look at is, you know, what are the products that are so critical in terms of time to market-
Mm
We give those products the priority. We decide what not to do. We decide to focus on standard products that are approved by JEDEC or defined by JEDEC, and we prioritize the product where we can be first to market.
Wonderful. You know, getting the train back on track here with the system conversation. Where do you see the biggest increase in content per system over the next several years?
Content per system, when the market moved from DDR4 to DDR5, we had an expansion of the content because some of the functions that were sitting on the motherboard in the DDR4 generation of products had to move to the module itself. Instead of having only a buffer chip, as in the DDR4 generation, we have a buffer chip and what we call companion chips. There's a content expansion just in the standard module. Going forward, you know, the market is going to adopt what we call MRDIMM. MRDIMMs are module solutions that double the capacity and double the bandwidth of systems.
That also multiplies the content by a factor of three to four, because at those speeds and those capacities, you need to add a lot more chips than you currently have on a standard DIMM. That's a very nice, I would say, growth vector for us. As I said earlier, we're also addressing the client market, because some of the challenges that we have found in the data center, we're gonna find them in the client systems as well. As speeds go up
Mm-hmm
and memory capacity goes up, you know, some of the functions that we had to perform in the data center have to be performed in the client systems as well. That's how we see the expansion. You know, more chips on the current DIMMs, MRDIMMs, that is going to increase the content quite substantially, and the introduction of similar types of chips in the client system.
Wonderful. If we're talking about what's coming next, right? Which newer products or technologies should investors pay attention to, that aren't really fully reflected in sort of your guide and your financials today?
Well, today, you know, most of our revenue is coming from the licensing business and from the product business. The product business is actually today the largest business in the company and the fastest growing business. What is not reflected fully in our numbers yet, in terms of potential growth, is precisely MRDIMM, because it's going to be introduced to the market towards the end of the year. It's gonna be in full swing next year. The client systems as well. All of these new products that we have developed over the last two years, that are actually in the validation process in the ecosystem as we speak, are not reflected in a meaningful way in our short-term revenues. They are going to contribute to the growth in the longer run.
Okay. As you think about products and newer products, again, this is back to the management questions, just as a core strength of the company: how do you balance investing ahead of the curve, right, versus waiting for the standards and the platforms to fully form?
Well, that's an iterative process, if you wish. We do talk to customers and partners. You know, the customers are the memory vendors, the partners are the processor vendors. We talk to the hyperscalers as well. With them, you know, we have a view of what they wish their roadmap to look like. We are also part of the JEDEC standard organization.
Mm-hmm.
That's where we also, with the other members of JEDEC, define products. That's how we define the products and that's how we prioritize our products. As I said earlier, in terms of timing of rollout of the product, it's really about being first to market with the most complex products. If you look at the history of our product introduction for the DDR5 cycle, for example, the first product we introduced is what we call the RCD product, the registering clock driver product on the module. It's the most difficult product to design, and it's the longest product to validate in the market. It was important for us to be first with that.
We came with companion chips later, and we did this on purpose because we didn't want to miss the transition from DDR4 to DDR5, and we didn't miss that transition. When we moved from DDR4 to DDR5, we went from a 20-25% market share to a 40%-plus market share today.
Hmm.
It was the right decision. We then introduced, you know, our companion chips. Among the companion chips, we invested heavily, in particular, in the power management chip, because we believe this is something that is critical to the performance of the system, after the RCD chip. It was important for us to build a team so that we have the know-how internally. We've introduced our first products, and with these first products, we have very strong and positive customer feedback, especially in the high end. That's what we do. We pick the things that are difficult to make and that are really relevant for our customers. We wanna try to be first with those. That's how we develop our roadmap.
When I think about choosing what not to do, and then thinking about the things you wanna tackle and how you're gonna tackle them, those are sort of critical parts of the strategy. You mentioned power management a couple times, right? That has not traditionally been an area where Rambus has focused on developing a specific product. It's happening now, right? Obviously, competition is robust in that ecosystem with a lot of traditional players out there. How is Rambus well suited to compete and succeed in power, and how much would you say the growth and the future direction of the company depend on the ability to succeed in that specific vector?
Power management at large is different from power management on a module. You know, a module environment is a very stringent environment in terms of thermal requirements in a server, in terms of space, and in terms of how precise, you know, this power delivery has to be. It's more than being able to do power management at large. It's about being able to do power management for these types of memory subsystems. The power management chip interacts with the other chips on the module. The understanding of how the system works is also very important. We do understand modules. We do understand the interoperability of chips on the module. We understand the ecosystem because these are the same customers with the same validation processes. We invested in power management several years ago.
We were silent about it. You know, we have built the teams, and we have released our first power management chip to the market. You know, we were not the first. You know, we came into a market where you had incumbents there. But we've been designed in by all of our customers, especially on the high-end side.
Would you say that the way you've tackled power management is purpose-driven as opposed to reverse engineering with a product that already existed?
Absolutely. It's purpose-driven. This is also an area where we work with JEDEC and the industry. AI infrastructure will require more bandwidth, more capacity, on the same type of form factor, and this is not gonna stop.
Mm.
The system requirements are going to become more and more complex. Having the ability to understand power management in that environment, clock recovery in that environment, system interaction in that environment is really critical. I think that as the infrastructure for AI continues to evolve and improve, it's gonna play on those strengths of ours.
I think it's clear: connectivity, memory, power, and you mentioned clock. Timing is a little bit of a dark art. How do you see that evolving for Rambus and, you know, how do you think you'll be playing that market?
Timing?
Timing, yeah.
The RCD chip is actually a timing chip. The challenge with these chips is, you know, you have memory signals at very high speed traveling, you know, very close to each other. There's a lot of crosstalk and these types of things. That's why, you know, on our earnings call, we talk about signal integrity. The key here is to be able to deliver clean signals between the memory and the processor.
Mm.
You know, in a very noisy environment and at higher and higher speeds. That's signal integrity. That's actually timing technology.
Yeah.
In the future, you know, there will be timing requirements. You know, the first adjacent market that we are addressing, as we said earlier, is the client system. In client systems, we're gonna have faster and faster signals traveling on lines that are very close to each other. That's gonna create noise. We will have to clean those signals. This is really an art.
Mm-hmm.
as much as engineering work. That's why it's very difficult to reproduce. You know, these are things that we have learned over decades of work in that specific area.
Terrific. All right, a little bit of a hypothetical. You're looking out two to three years in the future. What would make Rambus look meaningfully different than it does today?
Well, I think we'll continue to have a broader portfolio of products. That's certainly something that is clear. I think we're gonna have deeper relationships with our customers and the hyperscalers in terms of defining what's going to be required for the next generation of AI. These are two areas that are absolutely clear for us. There's certainly the capability for us to expand our know-how in signal integrity into other markets.
Mm.
Our know-how in power integrity into other markets. That's where we see the growth vectors above and beyond what's driven by the AI infrastructure. The AI infrastructure, you know, offers tremendous growth potential for us already.
Yeah. Yeah. Great. All right, well, look, I can't, you know, hand it off to the audience here without talking a little bit about your financial model. It's attractive. Rambus generates substantial free cash flow, right? Why don't we talk about some of your metrics and your performance in free cash flow? You know, anything that you're particularly keen for the audience and the investors to understand.
Yeah, we generate strong cash flows just because of the structure of our business, the combination of the IP business and the product business. Our number one, you know, priority is always to reinvest into products that are going to continue to fuel profitable growth in the business. This has been the case over the last couple of years. As I said, the roadmap has accelerated. You know, the pace of that roadmap has been, you know, doubling over the last few years, and the breadth of our product offering has been expanding as well. Our number one priority is to reinvest into our business so that we can continue to grow and to keep our leadership position there. You know, we look at M&A, of course.
That's always a vector, whether it is to expand or accelerate our growth, but also in terms of bringing the right resources internally. You know, as I said, we are specialized in power integrity and signal integrity, and it's important to have the right resources to continue to fuel that engine. Finally, we are returning cash flow to our investors on a regular basis. You know, say 40% to 50% of our cash flows are being returned to our shareholders on average. You know, we pick the right time to do this, but we are able to balance these three vectors.
Wonderful. A few minutes left. I wanted to see if anybody in the audience has any questions. We've got one right here up front.
Hello. I have one question. Compared to CXL, I think Rambus prefers MRDIMM for memory expansion, right? Right now, some hyperscalers are attracted to using CXL to disaggregate HBM and compute. My question is, firstly, I remember in previous quarters you said that Rambus may invest more in MRDIMM compared to CXL.
Will that change or not? Would you think about developing any kind of CXL controller in the future? That's the first question. The second question is, since the hyperscalers are thinking about using CXL to disaggregate HBM from the GPUs or ASICs, will this impact Rambus's HBM IP business? Yeah.
I think there were two questions there.
Two questions.
Yeah.
Yeah. The first question was around CXL and MRDIMM. The initial idea about using CXL for memory expansion was the following. If you take a processor in a data center, you know, the processor has a limited number of memory channels. You know, the AI workloads require a lot of memory. Very quickly you saturate those memory channels. Once your memory channels are saturated and you need more memory, the only way to do that is to have another processor with memory. That's not economically viable.
Is that the memory wall that people talk about?
This is the memory wall that people talk about. One idea was to use the CXL interface, you know, to actually add memory. You know, that is where it started. At Rambus, we played in CXL in two ways. One, we have a CXL controller IP, as part of our silicon IP business, that we sell to whoever wants to develop a chip in the semiconductor industry that has a CXL interface. The second way we play in CXL is we have developed our own CXL controller chip, but we've not commercialized that chip. The reason is that CXL defines an interface; it does not define a chip. It's not like JEDEC. What we see in the market is people who have a CXL controller chip. These are custom chips.
Customer A has a flavor of a CXL controller, customer B has another flavor, customer C has another flavor. For us, you know, the business model didn't really make sense if we had to develop a chip per customer. The market in aggregate is an interesting market, but it's a very fragmented market. If we wanna talk about memory expansion, the MRDIMM solution is a more elegant solution, because you just remove a DIMM from the memory channel, and you plug in an MRDIMM instead, and you double the capacity and double the bandwidth with the same software infrastructure and with solutions that are being developed by JEDEC. That has been adopted across the industry and across customers. That's one reason.
We still play in CXL, but we've limited our play in providing IP, silicon IP, to anyone who wants to develop one of those custom chips, you know, for disaggregation, whether it's for HBM disaggregation or NVDIMM.
Okay, perfect. Any other questions from the audience? Okay. If not, I do have one. We started with the big picture, so let's close with the big picture, right? If an investor, you know, puts their money to work on Rambus, and they check back in on that investment in five years, what do you hope they will say the company got right?
What I would say, I'd like them to say that we understood how the memory subsystem evolved. You know, we anticipated that, and we invested early and with conviction into the new memory subsystems. I hope they would say that, you know, we could balance our investment in a way that prioritized products where we have strength. I hope they would say that we have become a very strong player in the development of AI infrastructure, because memory is critical to AI infrastructure.
Perfect. Look at that. Almost perfect timing there. Thank you very much, Luc. Appreciate your time today. Thanks, everybody. Oh, we got one more question in the back of the audience. Sorry about that.
Sorry to keep everyone here. Can you help us think through DIMM growth? Like, there are a lot of moving parts on server growth and DIMMs per channel, and just what you guys view the DIMM growth market as and how you should perform in that. Just a second question for MRDIMM: just how we should think about the mix of that in the ecosystem in, like, 2027, 2028.
Yeah. For the DIMM growth, as I said, there are several factors entering the DIMM growth. There's the server growth. We quoted in the last call, you know, the 8% growth from Gartner, which is on the high end when you look at these analysts. We believe that DIMM growth can be higher than that. There are several factors coming into it. One is the number of memory channels per CPU. It moved from eight to 12, and it's gonna go to 16. That increases the DIMM growth. You have to balance that with the capacity on each one of the DIMMs, because you can increase capacity just by increasing the capacity on one DIMM, not only by adding DIMMs.
You have to balance also, you know, traditional servers versus AI servers. All in all, you know, we believe the DIMM growth is going to be higher than this 8% that we quoted. It's probably going to be double digit, but that's something we continue to track on a regular basis.
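[Editor's note: the back-of-the-envelope factors described above can be sketched as a simple model. Every number here is a purely illustrative assumption for exposition, not Rambus guidance or Gartner data.]

```python
# Illustrative sketch of the DIMM-growth factors discussed above:
# server unit growth compounds with memory-channel growth per CPU,
# then is damped because some capacity demand is met by denser
# DIMMs rather than additional DIMMs. All inputs are hypothetical.

def dimm_unit_growth(server_growth: float,
                     channels_now: int,
                     channels_next: int,
                     capacity_offset: float) -> float:
    """Estimate annual DIMM unit growth.

    server_growth:    server unit growth rate (e.g. 0.08 for 8%)
    channels_now/next: memory channels per CPU, current vs. next gen
    capacity_offset:  share of capacity demand absorbed by denser
                      modules instead of extra DIMMs (0..1)
    """
    channel_growth = channels_next / channels_now - 1.0
    # Compound the two growth drivers, then damp by the share of
    # demand that denser DIMMs absorb.
    gross = (1.0 + server_growth) * (1.0 + channel_growth) - 1.0
    return gross * (1.0 - capacity_offset)

# Hypothetical inputs: 8% server growth, channels going 12 -> 16,
# with 60% of capacity demand met by denser DIMMs.
growth = dimm_unit_growth(0.08, 12, 16, capacity_offset=0.6)
print(f"{growth:.1%}")  # prints "17.6%"
```

Under these made-up assumptions the model lands in the double digits, consistent with the qualitative point that DIMM growth should run ahead of the 8% server-growth figure; the actual offset between denser modules and added DIMMs is the speaker's open tracking question.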
Okay. All right. Well, unfortunately, we are out of time. Thanks, very much again, Luc.
Yeah.
Appreciate it. Thank you.
Yeah. Thank you.
All right.
Yeah.