Rambus Inc. (RMBS)

Status Update

Feb 18, 2025

Operator

Good day, and welcome to the Loop Capital Conference with Rambus Inc., a deep dive into next-gen MRDIMM server memory. At this time, all participants have been placed in listen-only mode. It is now my pleasure to turn the floor over to your host, Gary Mobley. Gary, the floor is yours.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Thanks, Paul. Thank you to everybody who joined us live, and to everybody who accesses this webcast on replay in the future. Again, my name is Gary Mobley. I'm the covering analyst at Loop Capital for Rambus Inc. And from Rambus, with us today, we have Steve Woo, who is a Fellow and Distinguished Inventor at Rambus. We also have VP of Strategic Marketing Matt Jones joining us today as well.

If you want to ask a question during this webcast, you can send me an email at gary.mobley@loopcapital.com, and I'll be sure to read those questions at the end of this presentation. As for the mechanics of the presentation and webcast, I'm going to turn it over to Steve, who will go through a series of slides, give an introduction to next-generation data center memory, and then double-click down into MRDIMM. With that, let me hand it over to you, Steve.

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

Well, thanks very much, Gary, and thanks everyone for joining us here today. Today I'm going to be talking about MRDIMM. The way the presentation is going to go is I'm going to start with some of the big motivators for why we need a solution like MRDIMM and the problems that are out there, and then I'll talk about what MRDIMM is and how it addresses those problems. So let's start with this first slide.

One of the big drivers these days across all markets, of course, is AI, and this slide really shows what the challenge is for systems that want to provide high-quality AI. The graph shows DRAM bandwidth on the Y-axis and DRAM capacity on the X-axis. What's plotted here are some of the most important and most accurate AI models out there, large language models like ChatGPT, Megatron, and Turing. These are some of the leading-edge AI models that are out there.

What this graph shows is that as these models get bigger, they need both more memory capacity and more memory bandwidth. You see this interesting relationship: as we get to bigger and bigger models, which are more accurate, both bandwidth and capacity are required in order to deliver high-quality AI. Now, these models continue to get larger and more sophisticated, and this relationship between bandwidth and capacity is continuing.

These advances that we're seeing in AI every few months are dramatic, and they really are fueled by more sophisticated models that, again, need both more memory bandwidth and more memory capacity. Some of the future AI that people are talking about, this drive toward more human-like intelligence, requires even more memory bandwidth and memory capacity. And so we see no stopping in this trend.

So on this next slide, I'm showing what happens when we produce AI models. It turns out there's a pipeline of processing that happens as we go from data to our trained models. On the left here, you can see the storage system. These models can have petabytes of training data or more. All this data doesn't really fit on one computer, so it's stored in a storage subsystem that's usually hooked to a network, and that data can be moved into the rest of the training pipeline.

In the middle here, what you see is a bunch of servers that do a very interesting job: data preparation. These servers request the data and organize it in a way that the AI training engines can then use to produce their models. Now, it turns out the storage subsystem itself uses a lot of main memory, typically less than a terabyte, and that main memory is DDR memory. The data preparation engines in the middle are a mix.

You'll see some traditional servers using regular DDR memory, but you'll also see some accelerators in that pipeline, and they'll sometimes have GPUs and the like. The amount of memory in these kinds of servers is typically around a terabyte of DDR, and the GPUs typically have less than a terabyte.

When we're done grabbing all this data and getting it ready for training, we send it over to the training engines, which are shown here on the right. These are some of the incredible workhorse processors that you see from companies like NVIDIA and AMD. And of course, they have a lot of HBM memory, this GPU memory, typically about a terabyte per engine today.

What's also interesting is that coupled with these AI training processors are some servers, typically x86 servers, with processors from companies like Intel and AMD. Those CPUs are packed with memory. I'll show you in a minute just how much, but the servers that go along with the GPU engines can have anywhere from 2 TB to 8 TB of DDR memory.
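
To see where a range like 2 TB to 8 TB comes from, here is a quick back-of-the-envelope sketch in Python. The slot count and module capacities are illustrative assumptions (32 DIMM slots with 64 GB or 256 GB RDIMMs are common dual-socket configurations), not figures from the presentation.

```python
# Back-of-the-envelope DDR5 capacity for a dual-socket AI head node.
# Assumed for illustration: 16 DIMM slots per socket, 64 GB or 256 GB modules.
sockets = 2
slots_per_socket = 16
total_slots = sockets * slots_per_socket          # 32 DIMM slots

for dimm_gb in (64, 256):
    total_tb = total_slots * dimm_gb / 1024
    print(f"{total_slots} x {dimm_gb} GB DIMMs = {total_tb:.0f} TB of DDR5")
# 32 x 64 GB modules  -> 2 TB (low end of the range Steve cites)
# 32 x 256 GB modules -> 8 TB (high end)
```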

The process of training can take weeks on some of the biggest models. At the end, we get models that are trained, and we can then use them for inference. Some of that inference happens in the data center, and some is done outside of it. Sometimes these models are compressed to make them a little bit smaller; it really just depends on the type of device you're trying to run on.

But the key here is that the whole training pipeline requires a lot of very fast memory, both HBM and DDR, in order to produce our inference models, which then do inference everywhere, from the data center to the edge and the endpoints. What's most important to realize is that the relationship between memory bandwidth and capacity that I talked about before holds: increasing both means larger, more accurate models and better quality results, and that trend really just isn't slowing down.

We hear a lot about the high-performance HBM memory that's used in the GPUs for training. It's very important, but what's also important is the role that DDR plays throughout the entire training pipeline. I'll talk a little bit more about the role of DDR inside the training engines themselves, but it's important to realize that DDR is used all throughout that pipeline.

All right, in the next slide, I'm showing what goes on inside your typical AI training server. On the left here is a rack of servers, very typical of what you'd get from companies like Supermicro, Dell, and others; this is kind of the standard workhorse configuration that you see today. Inside this rack, you'll see eight of these separate chassis, and they actually have a lot of compute inside of them. Half of each box is consumed by GPUs, which you see on the bottom here.

And those boxes typically have about eight NVIDIA or AMD training engines. In this particular example, I'm showing the NVIDIA H200 GPUs. There's about a terabyte of HBM3 memory in there, and the total memory bandwidth is about 38 TB/s inside. But what's also coupled with these eight NVIDIA GPUs is a dual-socket x86 server, which you see here on the right. Today, they can be dual-socket Intel Xeons or dual-socket AMD EPYC CPUs, and they have up to 8 TB of DDR5.
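
Those two figures fall out of simple per-GPU arithmetic. Here is a quick sketch using NVIDIA's published H200 numbers (141 GB of HBM per GPU at 4.8 TB/s); the per-GPU specs come from NVIDIA's datasheet, not from the slides.

```python
# Aggregate HBM capacity and bandwidth for an eight-GPU training box,
# using NVIDIA's published H200 figures: 141 GB of HBM per GPU, 4.8 TB/s each.
gpus = 8
gb_per_gpu = 141
tbps_per_gpu = 4.8

capacity_tb = gpus * gb_per_gpu / 1024   # ~1.1 TB ("about a terabyte")
bandwidth_tbps = gpus * tbps_per_gpu     # 38.4 TB/s ("about 38 TB/s")
print(f"{capacity_tb:.1f} TB of HBM, {bandwidth_tbps:.1f} TB/s aggregate")
```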

And right here in green, you can see the DDR memory modules. The way they fit onto this motherboard, which you're seeing from a top-down view, is that these black areas are where the CPUs go, and these black and blue vertical stripes are the sockets that we plug our DIMM modules into. Now, with AI demanding more memory bandwidth and more memory capacity, it would be tempting to say, well, things look great, why don't I just add more sockets to put more memory modules into? That way I can have both more capacity and more bandwidth.

The problem is that there's just no space to put them. We've actually run out of room, and you can see that they're very tightly packed. There's just no more room on this board. You might think, well, what if I just placed them farther away? But one of the challenges is that it's very difficult to get memory modules to work at high speed when they're far away, so they do need to be close. What we find is that at a time when the industry needs more memory bandwidth and more capacity, it's challenging to do it the old-fashioned way by just adding more modules.

Instead, what you have to do is figure out, with the available sockets and DIMMs I have, how do I allow each one of those DIMMs to provide more bandwidth and more capacity, given that I'm so space-limited? The other challenge is that enabling a new solution is very difficult, expensive, and time-consuming. So the desire here is to figure out not only how each DIMM module can provide more memory bandwidth and more capacity, but how to do it while still leveraging the existing DDR5 infrastructure, so we don't ask the industry to spend a lot more money and time enabling new solutions.

And so that's really one of the key challenges, at least for AI: the space constraint on the CPU side. I've talked a lot about AI, but it's really not just AI that's seeing this kind of challenge in terms of memory bandwidth and capacity. This was a slide presented by Google at MemCon in 2023, and what it shows is that even for their regular cloud workloads, they're seeing memory bandwidth limitations. They talk about it here in the slide, and I'll walk you through it.

Basically, what they're saying is that the number of cores they're putting onto CPUs is growing so fast that the memory system really can't keep up. Each one of those cores needs its own capacity and bandwidth. The upshot of this scatter plot is that memory bandwidth is utilized at a very high rate while the CPU cores are utilized at a much lower rate. What that means is they're simply running out of bandwidth to support these high core counts.

Obviously, they'd like to be able to utilize their cores and the CPUs at a much higher rate. The way to do that, as they point out in this slide, is they need more memory bandwidth. Of course, you need more capacity to go with it to support those cores. The rising core counts that we see are really a driver now for needing more bandwidth. Again, same issues. Where do I put those DIMMs? I can't just add more DIMMs, so I need to be able to get more out of each DIMM.

What's really amazing, if you think about the core counts, is that the previous generation of CPUs from Intel was at about 60 cores per CPU. Today, they're already over 100 cores, and there are projections for near-future generations to be over 200 cores. So that core count is rising dramatically, and we've got to be able to get more bandwidth and capacity out of the memory system.
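
To see why rising core counts squeeze the memory system, divide a socket's peak bandwidth by its core count. Here is a small sketch with assumed but representative numbers (12 DDR5-6400 channels per socket, a 64-bit data path per channel); the channel count and speed are illustrative, not taken from the slide.

```python
# Peak DDR5 bandwidth per core as core counts climb, channels held fixed.
# Assumed for illustration: 12 DDR5-6400 channels/socket, 8-byte data path.
channels = 12
megatransfers = 6400                           # MT/s per channel
channel_gbps = megatransfers * 8 / 1000        # 51.2 GB/s per channel
socket_gbps = channels * channel_gbps          # 614.4 GB/s per socket

for cores in (60, 128, 200):
    print(f"{cores:>3} cores -> {socket_gbps / cores:5.1f} GB/s per core")
# 60 -> 10.2, 128 -> 4.8, 200 -> 3.1: same socket bandwidth, a shrinking share.
```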

All right, let me do a quick intermediate summary here. The big drivers now for bandwidth and capacity are advanced AI and the drive toward more human-like reasoning, agents, and robotics. CPU core counts are growing dramatically as well. This is a die photo of an Intel CPU with 60 cores, and, as I mentioned, modern CPUs have more than 100, with a path to more than 200 in the near future.

And on the right, again, physical constraints are placing challenges on how we deliver that bandwidth and capacity. There's just no room to add more of the traditional DIMMs. So the question is, how do I allow them to provide more bandwidth and capacity, and how do I do it by leveraging the existing infrastructure that's there? And the answer is MRDIMM. MRDIMM is able to provide more memory bandwidth and capacity, and it's able to leverage the existing DDR5 infrastructure that's there.

So let's take a look at what MRDIMM is, how it's very similar to existing registered DIMMs, and how it's a little bit different. On the left here, you see the front and back of an MRDIMM, and on the right, the front and back of an RDIMM, or registered DIMM. RDIMMs are the modules that are typically available today, and MRDIMM is the new kind of DIMM.

What you notice is that they're very similar. They have the same kinds of components: DRAMs, which are these little squares here, a PMIC, an SPD hub, and temperature sensors. What's really interesting is that registered DIMMs and MRDIMMs use the exact same DRAM. One of the big differences is in these chips down here at the bottom called MDBs. There are 10 of them, they're data buffers, and I'll get to what they do in a minute.

The chief function of these data buffers is to multiplex the streams of data from two different DRAMs onto the same wire back to the host. On a regular DIMM, one DRAM transmits on a wire at a time. On an MRDIMM, two different DRAMs transmit on that wire, and their data streams are multiplexed by these data buffers into a stream at twice the data rate. That allows us to have twice the bandwidth going back to the host.

These 10 MDBs are new components on the module. MRDIMMs also use a chip in the middle here called a Registered Clock Driver. Now, it's a little bit different from the one we've seen in the past on registered DIMMs, because it has to talk simultaneously to two different sets of DRAMs to allow them to multiplex their data onto the data bus, but in spirit and in function, its role is very similar to the Registered Clock Driver that you see on RDIMMs.

There's also a PMIC, an SPD hub, and temperature sensors. It turns out the SPD hub and temperature sensors are the same ones used on both RDIMMs and MRDIMMs. And we have a new PMIC, which I'll talk about in a second; that new PMIC allows for the higher power draw that comes from including these multiple data buffers and from talking to multiple DRAMs simultaneously. Rambus provides a complete chipset, meaning we provide all of these components together, and when we do that, we ensure their compatibility. It makes it a lot easier for an end user to integrate them and be sure they're all going to work together.

All right, I'll just reiterate the role of these data buffers. With today's existing DDR5 DRAMs each transmitting at 6.4 GT/s, if I allow two of them to transmit simultaneously and multiplex their data onto the same wire of the data bus back to the host, I can run that wire at double the DRAM data rate: 12.8 GT/s back to the host.

What I'm really doing is operating different DRAMs in parallel and multiplexing their streams onto those wires. This is the way we provide both more capacity, because I have two sets of DRAMs talking simultaneously, and more bandwidth on the wires, because I multiplex them using these data buffers. So it's the exact same DRAM; I'm just operating the DRAMs in a slightly different way, controlling them a little differently with the MRCD, and then using that multiplexing capability in the data buffers.
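
Here is a toy model of that multiplexing on a single data wire. It is purely illustrative (the real MDB does this in mixed-signal hardware, not software), but it shows how interleaving two 6.4 GT/s streams yields one 12.8 GT/s stream.

```python
# Toy model of the MDB's job on one data wire: two DRAM ranks each supply
# data beats at 6.4 GT/s, and the buffer interleaves them into a single
# stream at 12.8 GT/s toward the host.
rank_a = [f"A{i}" for i in range(4)]   # beats from the first set of DRAMs
rank_b = [f"B{i}" for i in range(4)]   # beats from the second set of DRAMs

host_wire = []
for a, b in zip(rank_a, rank_b):       # multiplex beat by beat
    host_wire += [a, b]                # twice the beats per unit time

print(host_wire)                       # ['A0', 'B0', 'A1', 'B1', ...]
print(f"host rate = 2 x 6.4 = {2 * 6.4:.1f} GT/s")
```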

So the benefits of MRDIMM: again, we're able to get greater bandwidth per channel through this multiplexing capability. It turns out the DDR5 DRAM roadmap will max out at 9,200 MT/s (9.2 GT/s). But MRDIMM is able to extend that roadmap, because I'm allowing two different DRAMs to multiplex their streams onto the same wire. So MRDIMM will start at 12,800 MT/s, and follow-on generations will go even higher.

They'll be able to take end-of-roadmap DDR5 DRAMs at 9,200 MT/s and double that to more than 18,000 MT/s. Again, the host bus back to the processor runs at twice the native speed of the DDR5 DRAMs. We're able to get greater capacity per channel as well. I'm showing a couple of different configurations here: the standard form factor at the top and a tall form factor on the bottom. One way to get higher capacity is the tall form factor, which can hold twice as many DRAMs.

Another way is to pack more DRAM dies into one individual DRAM package and put it in the standard form factor; basically, there are more pieces of silicon in each of those little black squares, the packages. MRDIMM supports what we call four ranks by doing this. So you can either have four ranks in the tall form factor, or you can use what they call dual-die packages, putting two pieces of silicon into each package, and support four ranks that way in the standard form factor.
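
For a sense of the capacity math, here is a rough sketch. The device counts and die density are assumptions for illustration (x4 DRAMs with 16 data devices per rank, ECC devices not counted, 32 Gb dies), not figures from the presentation.

```python
# Rough module-capacity arithmetic for a 4-rank MRDIMM. Assumed values:
# x4 DRAMs, 16 data devices per rank (ECC devices excluded), 32 Gb per die.
# The 4 ranks come from either the tall form factor or dual-die packages.
ranks = 4
data_devices_per_rank = 16
gbit_per_die = 32

module_gb = ranks * data_devices_per_rank * gbit_per_die // 8
print(f"{ranks} ranks x {data_devices_per_rank} devices x {gbit_per_die} Gb "
      f"= {module_gb} GB per module")   # 256 GB under these assumptions
```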

What's really nice about this solution is, at least from the DRAM side and many of the other components, it really leverages the existing DDR5 infrastructure. So it extends the lifetime of DDR5, which is great, and the memory manufacturers and the industry have just gone through a process of spending a lot of money and time to enable this infrastructure. So it allows us to leverage the investment, and it also provides a seamless upgrade in terms of memory bandwidth and capacity.

It turns out that the same motherboards you plug registered DIMMs into can use the same sockets to support MRDIMM. So in the future, when you have a system that supports MRDIMM, you'll have the option to plug in either MRDIMM modules or regular RDIMM modules.

All right, the last slide here, I just wanted to summarize with Rambus's offering. Rambus announced the industry's first complete chipset for next-generation DDR5 MRDIMMs for data centers and AI. We have the industry's first MRCD and MDB, which are the key components on the module that talk to the DRAMs, and they enable the MRDIMM to run at 12,800 MT/s.

We also announced the next-generation server PMIC for both MRDIMMs and the highest-speed registered DIMMs, so they can use that same PMIC. We've designed it to include the advanced clocking, control, and power management features needed for these higher-capacity, higher-bandwidth modules. Operating in the DIMM environment is very difficult: there's very little room between the DIMMs, and we're very space-constrained, meaning power delivery and thermals are challenging.

So we've developed a number of innovations to be able to do that. The benefits, again: very flexible and scalable end-user configurations and compatibility across server platforms. And finally, it really does meet that unending demand for higher bandwidth, higher capacity, and higher memory system performance that you see in data centers and future AI workloads.

So that's the end of the prepared material. Gary, we can chat a little more about some of the specifics if you'd like.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Thanks, Steve, for that overview. Every time I hear you speak, I learn something new, so I appreciate that. If I'm not mistaken, the JEDEC standard-setting body ratified MRDIMM in July. In what way has that made this technology more valid, and in what way is it going to allow for broader industry adoption?

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

Yeah, so really, I think there are a couple of really big things. First of all, standardization among a lot of companies is important to getting adoption. In JEDEC, there's participation by really all the major processor companies, and coming together to develop a spec that meets the needs of each of the big players is very, very important. So that alone is a huge step toward broad adoption.

I think the other thing that was really important in the development of the spec was making sure we were able to leverage the existing DDR5 infrastructure. One major piece of adoption is just making sure people don't have to spend a lot of money to develop a new solution when there's a path to leveraging what's already there; it just makes it a lot easier. So those couple of things are really the keys to what JEDEC and the JEDEC member companies were able to do to enable MRDIMM.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Okay. Speaking of industry adoption, I believe there might be two server SKUs in the market today, one from Supermicro and one from Lenovo, that utilize MRDIMM, or maybe MCRDIMM. Maybe you can speak to the early days of adoption from a processor roadmap perspective: when might we see the industry really take off, based on the different Arm or x86 server generations?

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

Yeah, that's a good question. As you pointed out, there is what people now call the first-generation MRDIMM, but really the original name was MCRDIMM. That was a proprietary implementation, really just to show the viability of this type of approach. At least right now, it's more, in some ways, about proving the concept that you can do this.

Really, when you get to the JEDEC stage, where it's more open and there are a lot more companies participating in the definition and making sure their needs are taken care of across the range of platforms they want to support, that's the next step toward broader, larger-scale adoption of the technology.

Right now, what we've seen is that both Intel and AMD have made announcements of their next-generation platforms. Intel in particular has announced what they call the 6th Generation Xeon, which people used to call by its code name, Granite Rapids, and AMD has their 5th Generation EPYC processor, which used to be called Turin. Both of those have been announced to ramp in 2025.

What we think is that the follow-ons to those platforms are probably about the right intercept timeframe for this technology, and we're certainly readying our technology to be available for those types of platforms.

If that's how it works out, then you'd expect to see announcements happen maybe late this year or early next year, and then you'd expect to see the platforms come to market as early as next year.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Okay. I guess with any server-related memory or storage, there's always this debate: do advancements in memory bring about more efficient usage of the memory, or has application performance been starved to the point where we simply remove that cap, and the available market for memory stays the same?

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

Yeah, it's a good question. Really, MRDIMM was developed to address some of the performance needs that were bottlenecking these applications. I think, from the slides I've shown and shared with you, what we're really seeing is that all across the board, bandwidth and capacity need to increase. A huge driver of that right now is AI, but cloud workloads are seeing this demand as well.

So really, the processors in both AI and regular cloud workloads are being bottlenecked by insufficient bandwidth and capacity. I think it's really less of an efficiency play, where you're trying to reduce the number of components you need, and more about unlocking the real value that's in those processors and unlocking the capability to produce better AI. That's what's really driving the need for things like MRDIMM.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Okay. I don't know if this is a question for you or Matt, but how deep do you see the penetration of MRDIMM going? Do you see a one-for-one transition from today's RDIMM implementations, a full conversion to MRDIMM?

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

Matt, do you want to jump in here?

Matt Jones
SVP and General Manager of Silicon IP, Rambus Inc

Oh, go ahead, Steve.

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

Yeah, no, you go ahead, please.

Matt Jones
SVP and General Manager of Silicon IP, Rambus Inc

Yeah, I was going to say, just to jump in here, that it's certainly going to depend on workloads, as Steve said. The unlocking of advanced AI and cloud workloads is going to drive the adoption and penetration, and that unlocking and enablement will, in turn, further drive workload development to leverage it. I think they're going to be very complementary going forward.

There'll be an array of solutions offered. What we call standard RDIMMs today are becoming very high performance, but we can call them standard as MRDIMM comes in. And it'll be a range, a cost and performance spectrum, from the RDIMMs that go with each generation of processors through to MRDIMMs. So it remains to be seen as we venture out to introduction and adoption, but I think we're going to see them coexist in the marketplace, given some of the cost and performance trade-offs.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Okay. And I guess related to that, I have a follow-up question. In today's x86 server processor landscape, you have anywhere between eight and 12 memory channels supported, with two DIMMs per memory channel. Because of the increased bandwidth and capacity that MRDIMM brings, do you see those DIMM slots being any more or less populated with the arrival of MRDIMM?

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

Yeah, I don't know, Matt. I don't know if you want to jump in on that one or you want me to.

Matt Jones
SVP and General Manager of Silicon IP, Rambus Inc

Yeah, as Steve said, really what drove us to MRDIMM as a solution was the need for memory subsystems with increased bandwidth and capacity. It's not an efficiency play, as he touched on earlier. So I think what you're going to see is a good deal of continued growth in DIMMs per box as those memory channel counts increase, as you touched on, Gary.

As we cap out on the current generation, say at 12 or perhaps 16 channels per CPU, they will be able to extend the capacity and bandwidth through MRDIMM. On aggregate, it will be a mix, but from today forward we'll continue to see more DIMMs per box, and MRDIMM is a piece of that. I don't think it's necessarily going to drive us backwards in that sense.

Steve likes to cite the fact that, in the early days of multi-core CPUs, we were told we were going to see fewer CPUs because of the multi-core phenomenon. We haven't seen that. We've just seen more CPUs with more cores, and memory has lagged behind. So MRDIMM is a step toward closing that gap, knocking down a bit of that memory wall.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Okay. I think in the early days of MCRDIMM, Steve, to your point, it was more of a proprietary technology; I think it was supported by Intel and maybe SK Hynix and Renesas in its first iteration. But now that MRDIMM has come through the standard-setting body, JEDEC, it's being adopted more broadly in the industry. It's obviously embraced by you, with your October chipset introduction for MRDIMM.

And so my question to you is, you know, given sort of how this market has evolved, how do you see it affecting the competitive landscape between Rambus, Montage, and Renesas, which are the three primary players in these memory interface chipsets?

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

I think the thing that we're happiest about is that we were really first to announce support for MRDIMM with our chipset. It's always great to be first, to have the technology available and ready for qualification and launch. It's a market where, similar to the regular RDIMM market, there are three suppliers and a few major customers, but there's broad support throughout the industry for a number of different players.

So I think you'll probably have some similar types of dynamics between the companies that are supplying the chipset technology, but we're just happy that we were the first to announce, and we're happy that our technology is really ready to go for these platforms.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Okay. From a technical standpoint, the MRCD, the Registered Clock Driver, is presumably more complicated to develop; there are probably a lot more advanced analog and mixed-signal design challenges involved there. The PMIC would be, I guess, sort of an ante to compete in the market, having a solution there as well, and then the data buffer. So what's unique about Rambus in those new chip designs that allows you to compete better vis-à-vis the two competitors I mentioned earlier?

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

Yeah, that's a great question. Among the many challenges, probably the most challenging part was the design of those data buffers. The chief reason is the signaling: we're basically taking in two data streams at 6.4 Gbps and producing a data stream at 12.8 Gbps back to the host, so a much higher data rate. That portion of the design alone is very, very challenging.

There are a lot of different physical effects that start to happen as you go to higher and higher data rates. It's analogous to when planes first approached the sound barrier: there were a lot of physical effects people had to deal with that they hadn't dealt with before. The same thing happens in electrical signaling; as you go faster and faster, there are more effects to account for. It turns out, though, that Rambus's long history of high-performance, high-speed signaling was very valuable here.

This is not the first time we've been at these data rates. We've had experience not only with memory channels at very high data rates but also with signaling over wires more generally, with things like our SerDes work. So we were able to lean on that a lot in the design of those data buffers.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Okay.

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

That isn't the only chip that changes, though. Like you mentioned, the MRCD has to talk to more chips simultaneously, and that turns out to be a challenge too, maybe not as difficult as running at the 12.8 Gbps data rate of the data buffers. But you've also got to deliver power to all these things. And you can imagine that because those modules have more components running faster, they're going to consume more power.

And so the design of the PMIC was also critical. It's challenging enough to produce components that do each of those things, but the space- and thermally-constrained environment of a DIMM adds to the challenges. We have a long history of working with DIMMs, so we really understand a lot about that environment and how to make all those components play together well along with the DRAMs.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Got it. Okay. Well, to bring it home for investors, I suppose it makes sense to talk about how this MRDIMM market changes the serviceable market for the memory interface chip business that you've successfully run over the years. From a bill-of-materials perspective, how does an MRDIMM module compare to an RDIMM module, as far as the value of all the chips that you provide?

Matt Jones
SVP and General Manager of Silicon IP, Rambus Inc

Yeah, Gary, that's a great question. As we've talked about on recent calls, the way we see this playing out is that it's certainly going to be a market that takes a little bit of time to develop. But as we project forward, with the upgrade of the RCD function to the MRCD, the enhanced capability and value of the PMIC, the SPD hub and temperature sensor content, and then the addition of the MDBs for the buffering, we think conservatively this is 4x the content of an RDIMM as we step up from R to MR, in terms of the Rambus opportunity and content per DIMM.
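
One way to picture that step-up is to list the chip content per module as described earlier in this webcast. The counts below follow Steve's component descriptions, with two temperature sensors assumed (typical for DDR5 modules); note the roughly 4x figure is Matt's content-value estimate, not a simple chip count.

```python
# Chipset content per module, per the descriptions earlier in this webcast.
# Two temperature sensors is an assumption (typical for DDR5 modules).
rdimm  = {"RCD": 1, "PMIC": 1, "SPD hub": 1, "temp sensor": 2}
mrdimm = {"MRCD": 1, "MDB": 10, "PMIC": 1, "SPD hub": 1, "temp sensor": 2}

print("RDIMM chips: ", sum(rdimm.values()))    # 5
print("MRDIMM chips:", sum(mrdimm.values()))   # 15
```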

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Okay. And as we talked about earlier, you're probably not going to see a one-for-one offset between MRDIMMs and RDIMMs; you'll see hybrid usage into the future. So, considering those different factors and the bill-of-materials increase, what do you think it does for your serviceable market?

Matt Jones
SVP and General Manager of Silicon IP, Rambus Inc

Yeah, that's a great question, certainly with the caveats around penetration and adoption that we talked about. We'll begin to see this impact the served market, as Steve talked about, with the follow-on generations to the 6th Generation Xeon and 5th Generation EPYC likely to be the intercept point, so beginning in 2026.

But as we see this ramp to what we think are reasonable penetration rates, and with those content uplifts, this could represent $600 million-$700 million of additional market opportunity beyond RDIMM, a couple of years deep into the volume production of MRDIMM.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Okay. And so if memory serves me correctly, the baseline for RDIMM SAM is roughly $1 billion, correct?

Matt Jones
SVP and General Manager of Silicon IP, Rambus Inc

Yeah, plus or minus. That's a reasonable baseline given that, as I think we've talked about, it was closer to $725 million-$750 million in 2024 and growing nicely. So depending on the timeframe, Gary, plus or minus a billion in the timeframe when we see MRDIMM impacting is probably reasonable, which I think is what you're getting at.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Okay. The entirety of our call so far has focused on the roughly 40% of your revenue that comes from products, or memory interface chip sales. But there's another aspect of your business that's $100 million plus: your silicon IP business. My question is, is there an opportunity to license controller IP in that silicon IP business specific to MRDIMMs?

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

Yeah, a good question. As the audience is probably familiar, we do a lot of controllers, but it turns out our controllers are really separate from the server-class DDR5 memory controllers that exist in the CPUs today. So the opportunity isn't necessarily there. But what's more interesting is that, as we've mentioned, AI systems, for example, have a lot of different kinds of memory and a lot of different kinds of processors.

And what we're seeing is that when one of the memories gets faster, all of the memories have to get faster. So we're seeing much more demand for faster controllers on the AI side of our business as well; things like HBM and GDDR controllers have to get faster too. We've made previous announcements about our HBM and GDDR controllers, and there, again, we're focusing on being early to market as well as at the highest data rates available, because we know that's the direction and that's what customers are asking for.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Okay. The third leg of the stool in your business is the patent licensing business. So does the technology development on MRDIMM affect the leverage in those patents when coming up for license renewals, let's say five to 10 years down the road with the existing licensees?

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

Yeah, what I'll say about that part of our business is that I'm on the technology side, and there are a lot of challenges to solve. When we do innovate, we make sure we protect those innovations with patents, and we add them to the patent portfolio that licensees get access to.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Okay. Now, what we haven't talked much about is Arm-based server compute, which arguably is on the rise. Clearly, Arm-based servers for general-purpose computing are gaining share against the x86 community. In some respects, that's being driven by the hyperscalers promoting, in the case of AWS, processors like Graviton, and in the case of NVIDIA, the Grace platform, which will accompany Blackwell in front-side compute in the GPU appliances they intend to sell into the future.

So in what way does that affect not only your opportunity in MRDIMM, but more broadly your memory interface chip business?

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

Yeah, we're big fans of the Arm ecosystem. There are a lot of nice processors out there, and we see Arm-based CPUs as, again, key to driving innovation in data centers and needing a lot of DDR memory. It turns out there's really one exception, which is Grace, as you mentioned, and that's a more specialized, lower-volume kind of CPU application.

With that one exception, all the other Arm-based CPUs that we've seen are using DDR. Graviton, as you mentioned, is an important adopter of DDR5 technology, and they leverage what the industry is doing for DDR-based solutions. So I think the key is that, with the standardization in JEDEC, things like MRDIMM become available to really everyone.

A lot of companies have participated in the definition, so the Arm ecosystem has access to this technology as well, because it's been developed through JEDEC. Just like the x86 community has access to it, so will the Arm-based community.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

Okay. We've covered a lot in these 45 minutes, so this is probably a good stopping point. That is, unless you have any additional comments that you want to make. If not...

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

None for me.

Gary Mobley
Managing Director and Senior Analyst of Semiconductors, Loop Capital

If not, I will again extend a thank you to you and Matt for allowing Loop Capital and myself to host this event. I also want to thank all those who joined live and those who replay this webcast in the future. And with that, we'll go ahead and conclude. Thank you all.

Steven Woo
Fellow and Distinguished Inventor, Rambus Inc

Thanks, everyone.

Matt Jones
SVP and General Manager of Silicon IP, Rambus Inc

Yeah, thanks, Gary. Thanks, everyone.

Operator

Thank you. This does conclude today's conference. You may disconnect your lines.
