A mix moving toward high-growth, product-driven revenue, but still supported by a highly profitable and stable patent licensing business. For my first question, Luc, I'd love for you to give us a quick update on how you see the business today, and what your key strategic focus is going forward as well.
Thank you. Yes, so Rambus was founded about 35 years ago, and it was founded on the development of technologies that are critical to interfacing processors with memory. That's how we started. That's why we're called Rambus. And our business model 35 years ago was pure patent licensing. So people who wanted to develop memory interfaces, whether they were memory vendors or processor vendors, had to have a license with Rambus to develop those technologies. But over time, we've evolved the business, as you say, toward developing technologies for the market. We started by developing silicon IP blocks. The business model there is that we develop blocks of silicon IP in the same areas: memory interfaces, high-speed connectivity interfaces, and security. We sell those IP blocks to semiconductor companies, and the semiconductor companies integrate those IP blocks into chips that they then sell to their customers.
That's the first step toward providing technology. The second step toward providing technology to the industry was to develop a pure product business where we develop chips for memory interfaces. That's where we started. Those chips, it's the same concept: they sit on a memory module, and they are in charge of the interface between that memory module and the processors in data centers. So you see, data center is really, really important for us. High-speed memory interface is really important for us. We have these three ways to go to market today. Now, in terms of growth and the role of each one of these businesses, the patent licensing business, which was the original business of Rambus, continues to provide a stable flow of high-margin revenue. It's between $200 million and $210 million a year. It's 100% margin.
It actually allows us to fuel our R&D efforts into developing technologies. The silicon IP business is also a high-margin business, above 90% margin. It's a business that last year was around $120 million of revenue, and it grows 10%-15% a year. Finally, the product business is a business that we started in 2016. We had zero revenue then. Last year, we had $250 million of revenue. This is a business where we see most of the growth. The margins are between 60% and 65% in that business. It's a business that grew really, really fast. This is a business that went through an industry transition from DDR4 memories to DDR5 memories. In the DDR4 memory generation, we grew from 0% to 20% share. As the market transitioned to DDR5, we actually could capture about 40% share.
So very high growth there. So it's a good combination of technologies and ways to go to market. Now, going forward, we will continue to see high growth from our product business because there's an acceleration of demand on technology in the data center in particular. So we have to develop products faster and faster. There's also an increase in content on the memory modules that we play into. So there are two vectors for growth in the product business.
Really good. You talked about how you evolved your product business and about the three different ways you go to market. You also mentioned on the last earnings call that your addressable market is going to double in the next few years. So I want to start there and double-click on the data center and the AI opportunity. First, let's talk about product, OK? With the transition into inference, there's going to be continued, or increased, demand not only for high-performance memory but for high-capacity memory as well. And as a result, enabling signal and power integrity becomes really, really critical. How does that impact the market opportunity, as you said, for your memory interface chips? Maybe we start there.
We start there, yeah. So we talked about the transition from DDR4 modules to DDR5 modules. On the DDR4 module, you only had memory and what we call an RCD chip, which is the chip we make. And in that generation of products, we had to develop one RCD chip every two years. Every time the speed on these memory modules increased, we had to develop a new chip. And that's how we grew from 0% to 20%. When the market transitioned to DDR5, there are more chips that you have to have on the memory module. Some of the functions that were performed on the motherboard had to transition onto the memory module. And these functions are a temperature sensor, a controller, and power management. And that allowed us to address a larger market.
If you take the original market for RCDs, it's about $750 million of market opportunity. If you add those companion chips, the SPD hub, the temperature sensor, and the power management, we add about $600 million of TAM just because of the transition from DDR4 to DDR5. What we talked about on the earnings call is that some of these technologies in the data center are going to waterfall into the client space. The reason is that as client systems, high-speed PCs and notebooks, come onto the market, some of the challenges we have in the data center are going to be found in that space as well. Some of the technologies that we have developed for the data center will waterfall into that space. We have started to introduce products for that client space. It's nascent. It's going to grow with time.
But we estimate the TAM for the client products that we're developing today adds another $100 million of TAM. And finally, what we announced in the fourth quarter of last year was what we call MRDIMM, which is really a step forward in the ability to add capacity and bandwidth on a data center module. And that is adding another $600 million of TAM. The reason is that when you develop an MRDIMM, as opposed to a standard DIMM for DDR5, not only do you have a more complex RCD chip, which is going to have a higher ASP, and a more complex power management chip, which is going to have a higher ASP, plus still two temperature sensors and one controller, but also 10 more chips that we call data buffers that you need to perform the function of an MRDIMM.
So when you move from a DDR5 standard DIMM module to a DDR5 MRDIMM module, you actually increase the content by a factor of four, which explains the $600 million. So you add all that up, and with the technologies that we develop today, you end up with a doubling of our TAM.
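As a back-of-the-envelope check on the figures quoted above, here is a minimal sketch that simply sums the TAM components as stated in the conversation. The component labels and grouping are our own illustrative framing, not company guidance, and the rounded total is only meant to show how the pieces combine to roughly double the addressable market.

```python
# Back-of-the-envelope sum of the TAM figures quoted in the discussion.
# All values in millions of US dollars; labels and grouping are illustrative
# only and not company guidance.
tam_components = {
    "DDR5 RCD (original buffer-chip market)": 750,
    "DDR5 companion chips (SPD hub, temp sensors, PMIC)": 600,
    "Client chips (clock driver, client PMIC)": 100,
    "MRDIMM chipset (RCD, PMIC, data buffers, etc.)": 600,
}

total = sum(tam_components.values())
for name, value in tam_components.items():
    print(f"{name:55s} ${value:>5}M")
print(f"{'Total addressable market':55s} ${total:>5}M  (~${total / 1000:.2f}B)")
```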
Really cool. You touched on a lot of interesting points here. Let me go back to the client market opportunity. You have traditionally been focused on data center, and it makes sense that you can leverage that technology and tap into the client market. You launched the client clock driver chip, right? How do you see Rambus's exposure to client versus data center over time? In the long run, what's going to be the relevance of that market for Rambus?
Let me go back to a previous comment you made about power integrity and signal integrity.
OK.
And say a few words about that so that I can answer your question. Signal integrity is about the ability to transmit a lot of data on a parallel bus in a very noisy environment. And that's what a buffer chip does. You have a memory module in a server with a lot of memory capacity, very small signals in terms of voltage, parallel buses, so the signals are very close to each other, and a very noisy environment. So it's prone to errors. And the higher the speed of the system, the more prone it is to errors. So signal integrity is about the capability of transmitting smaller and smaller signals faster and faster in a more and more noisy environment. And that's the capability and the know-how that we have developed at Rambus since the inception of Rambus.
Actually, when I talked about the patent portfolio from 35 years ago, it was all about signal integrity. Power integrity has to do with the ability to deliver power to these modules in a very efficient way. So what you want to do is inject power into these modules, but you don't want this power to be transformed into heat; you want this power to be used by those chips. You also want this power to be very, very stable and at different levels. If you take a module, different chips have different power levels, so you want to have each one of these power levels delivered in a very stable way. And finally, you want to make sure that the ramp-up and ramp-down of power are really well managed. So power integrity is becoming really, really important.
And that's also a capability and a know-how that we have established at Rambus. We started our power management activity three or four years ago, quietly. We developed our own teams. We introduced our first products to market last year. We were in the second wave. But we are qualified with each one of our customers on the higher-end version of these products. So power integrity and signal integrity are fundamental know-how that we have developed over time. The barriers to entry to master those technologies are pretty high. And they will become more and more important in the data center and in the client. And the reason it's becoming more important in the client space is that client system performance keeps going up as well.
So when you reach speeds in the client space of, we estimate, 6,400 MT/s or 7,200 MT/s between the processor and the memory, then you have to reconstruct these signals. That's the signal integrity concept. And you have to make sure you also deliver that power in a very efficient way. That's power integrity. So over time, you will see the equivalent of the buffer chip and the equivalent of the PMIC that we have in the data center; you will find them in the client space. It will start with the high-end client processors, as I said, at those higher speeds. But over time, because everything goes up, this will trickle down into most of the client space. So initially, as I said, the TAM is about $100 million. That's not a big TAM in the short run.
But in the long run, we believe this is going to be used everywhere. And that's why we continue to try to be at the forefront of these technologies.
I love to hear that you're qualified with your PMIC at a number of customers. Maybe that's a question more for Des: how do you see the revenue coming from the new PMIC initiative in the short term? How do you see it becoming relevant to your P&L going forward?
Yeah. As Luc mentioned, we released our PMIC solutions in April of last year. We've gone through the qualification phases with the customers just now, and what we anticipate is really a significant revenue contribution in the second half of 2025, continuing to grow into 2026 and beyond. If you were to take the aggregate of our companion chips, we really size this as being about a $600 million market opportunity for us, and we're targeting getting toward that 20% market share on the companion chips. So we'll see revenue in the second half of 2025 growing into 2026 and beyond as we get toward our targeted market share there.
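As a rough illustration of how that target translates into revenue, here is a minimal sketch using only the figures Des quotes above. The assumption that the roughly 20% share target applies uniformly to the full $600 million companion-chip figure is ours; actual product mix, pricing, and timing will differ.

```python
# Illustrative share math for the companion-chip opportunity described above.
# Assumption (ours): the ~20% share target applies to the full quoted
# ~$600M companion-chip market; actual mix and timing will differ.
companion_tam_musd = 600      # quoted companion-chip market size, in $M
target_share = 0.20           # quoted long-term market-share target

implied_revenue_musd = companion_tam_musd * target_share
print(f"Implied annual revenue at target share: ~${implied_revenue_musd:.0f}M")
```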
And that's for the power management only, right? But in the last earnings report, you mentioned that you launched a record eight different chips just last year. And they're in different phases of pre-qualification and pre-production. So maybe can you touch on where they are, from pre-qualification to pre-production, and how much they're going to contribute to revenue in the short term as well, not only power management, but all the other chips that came to market?
Go ahead. Sorry.
I'll start with the first part of the question, and I'll let you talk about the numbers and where they are. So we introduced in April of last year the first suite of power management chips that are JEDEC-compliant. And you remember, we were in the second wave of introduction. Competitors already had chips on the market. And the key there was to convince customers to allocate resources to look at what we had to offer. They looked at what we had to offer, and they actually realized that the high-end portion of that suite of products was showing better performance than our competitors' products. So all of our customers have actually allocated resources to designing those products in. And we are in qualification, to answer your question, with our three memory vendors on this Extreme, what we call the Extreme PMIC.
It's a name that has been defined by JEDEC, not by us. But the Extreme PMIC, we qualified with three customers. Continuing with power management, in Q4 of last year, we announced our chipset for MRDIMM. And remember, MRDIMM is a very complex DIMM. We are the first to have announced that chipset. As part of that chipset, there's a specific power management chip for MRDIMM. And we were the first to announce that. So we continue on that path of offering the best-performing product on the high-end side so that customers actually allocate resources to qualify us in. And then we waterfall down to the other products. So that's on the power management side. So that's the qualification status for the first PMIC. As for the MRDIMM PMIC, the MRDIMM market is going to start production in the second half of 2026.
We are at a very early stage there, so those two are at different stages. On the client clock driver, which we announced in July, we are in qualification with all three of our customers. Typically, those qualification cycles are a bit shorter. But as I said earlier, the TAM is a bit lower as well. And finally, part of these eight chips is the whole MRDIMM chipset. Remember, there's a new RCD, a new PMIC, and 10 DB chips. These are being introduced to customers as we speak. And as we said on the earnings call, these are going to be in production in the second half of 2026. So it ranges from the second half of this year for the first products to the second half of 2026 for the others. That's about the timing and the different qualification stages.
Yeah. I think you summarized it. We've got the building blocks in place. Now we'll start to see the revenue contribution coming in in the second half of 2025 and continuing to grow into the model. But we're really pleased with our execution on the R&D side and with releasing these new products to the market.
And those are sticky products. So once you're designed in, can you touch a little bit on the life cycle for those? Once you're designed in and you start production, how long does it take for the next cycle?
It's accelerating. In the DDR4 generation of products, we had to develop one new product every other year. This was only the RCD. So every other year, there was a speed upgrade from the processors or the memory, and we had to adapt to it and have a chip that goes faster. Over the last two years, this has accelerated quite a lot. We have to develop one RCD chip every year now. This year, for example, we actually have three generations of DDR5 at different stages of production or qualification, which was never seen in the DDR4 generation. So there's a massive acceleration of our development cycles. And at the same time, because there are more chips on the module when you take a DDR5 module, we also have to develop these companion chips, and that explains why we announced eight new products last year.
This is also why it was very important for us as a company over the last few years to focus on these memory modules, because we were expecting this acceleration, and we were expecting this expansion of the product offering, and we had to make sure that we had the R&D teams in place to keep up with that. Now, I think the product life cycles are going to stay a bit shorter; I think we're going to be on that pace. AI has played a key role, not necessarily in volume, but certainly in the acceleration of the technology, because AI requires so much bandwidth and capacity that the industry had to move from one generation to the next much faster than they initially expected, and we could keep up with that, which was really good.
All right. You mentioned earlier that the compute opportunity is four times in an AI server versus a traditional general-purpose server. That's how I read that comment of four times. But you still have a lot of opportunity in the traditional server. How do you see the replacement cycle playing out there? What's the opportunity for just the traditional side as well?
Yeah. So to be precise, the comment about four times is that the dollar content on an MRDIMM is four times the dollar content of a DIMM.
OK.
Whether they go into an AI server or a standard server.
Or a traditional, yeah. OK.
On the server side, we expect, as most of the analysts and people we talk to expect, the server market to grow mid- to high-single-digits this year. But within that growth, we expect to continue to gain share. But the number of servers and the growth of the server market is only one vector that we look at when we look at growth. What's happening, you're mentioning AI servers, the first thing we need to understand is that from our standpoint, we don't set AI servers in opposition to standard servers. And the reason is, in a standard server, you have CPUs. But in an AI server, you also have CPUs. You have two CPUs with standard DIMM memory. And you have GPUs and HBM. So when the standard server market grows, we grow with it. When the AI server market grows, we grow with it.
The technology demands are a bit different, though. In an AI server, these standard CPUs, where you have standard DIMM memory, actually prepare the data, groom the data, structure the data to feed it to the GPUs and HBM memory, which are the number crunchers. And that's a function that requires a lot of memory at very, very high speeds. So what we typically see is that a standard CPU in an AI server requires more memory at a higher speed. What that means is that if you compare the amount of memory you need in a standard server to the amount of memory you need in an AI server, there's typically a factor of two to six. So when people develop AI servers, they need the highest speed of DDR5. They need to use the latest version of the processors.
They need to use all the memory channels that are available there. They have to populate all of them to the fullest if they can. And they have to have the highest-density modules. And that's where the MRDIMM discussion becomes very interesting, because what MRDIMM allows you to do is, on the same infrastructure, on the same architecture as a standard server, whether it's for standard functions or AI functions, you can remove a standard DIMM and put in an MRDIMM. And all of a sudden, you double the capacity and you double the bandwidth. That's the magic of this technology. And that's why we love it. That's why we invest in it, because it's supported by the industry. It's defined by JEDEC. So everyone is behind it. It's a very elegant way of increasing capacity and bandwidth.
And for us vendors, it also increases the dollar content by a factor of four, approximately. So in AI servers, you will probably see the first adoption of MRDIMM, because the requirement on capacity and bandwidth is so much higher compared to standard servers. So that's how we look at it. Both grow, and both use standard DIMMs. But on the AI server, the demands on capacity and bandwidth are much higher. So they will probably be the first adopters of MRDIMM types of technology. I hope that was clear.
Yes, that was very good. Thank you. The data center and the AI opportunity is clear on the product side. But it's also a huge tailwind for your silicon IP business as well. Can you touch on that?
Sure. So you remember, we have patent licensing, silicon IP, and products. The silicon IP business, as a reminder, is about $120 million, growing 10%-15%. And that's a business where the business model is that we sell licenses. So we develop IP blocks, and people buy licenses on a usage basis. And our customers are semiconductor companies. Now, back to the focus of the company: the IP we develop for the silicon IP business is in three areas. One is high-speed memory, so HBM and GDDR. The second one is high-speed interconnect, PCIe and CXL, to simplify. And the third is security. These three pieces of IP are critical to data center and AI. And our customers are semiconductor companies. So with this business, we have the ability to see where our customers are going.
And all of them are developing solutions for the AI world, whether they're startup companies or the largest companies. So we have this whole range of customers. And what we see is they need PCIe and CXL because the architectures are becoming more heterogeneous, so all of these chips have to communicate among themselves at a very high speed. So that's the tailwind for PCIe and CXL for us. The tailwind for HBM is obvious: AI training is using HBM. And we believe that for AI inference, because of cost concerns in particular and because AI inference will probably move to the edge, GDDR memory interfaces are going to be critical. So we have GDDR, and we're leading in GDDR. So the second tailwind is the memory space.
And finally, in AI systems, whether they are from startups or large companies, whether they're small or big, you typically have a very heterogeneous type of architecture. So you have a lot of data moving between chips. So the ability to secure that data when it sits in a chip, or when it moves from chip to chip, is really, really important. And this is what's driving our growth on the security IP side. So these three elements are critical to AI. And they are the vectors of growth for us in the silicon IP business. And they give us visibility as to where the market is going for chips going forward.
It's interesting that you have a very diverse pool of customers, the startups and the large names as well. What's the competitive landscape for that business?
So for the silicon IP business, our main competitors are Cadence and Synopsys, and then some startups. But I would say this is a market, like many in the semiconductor industry, where our competitors can also be our partners. So for example, we sold our PHY business to Cadence in 2023. The PHY is the last element on the chip before the signals leave the chip. We do the controller, which is the next element. So interfacing the PHY and the controller is really, really critical. So there are multiple customers for whom we actually collaborate with our competitors, because those customers want to use the best PHY and the best controller. So Cadence and Synopsys are both competitors and partners in some instances.
Nice. You talked about CXL and PCIe. What's your perspective on the CXL opportunity ramp? When do you think that's going to really take off?
So for us, the CXL opportunity has ramped, or is ramping, in the silicon IP business. We've introduced CXL 2.0, CXL 3.0, CXL 3.1. And that's part of our 10% to 15% growth in the silicon IP business. On the product business, we have developed a chip. We have a chip; it's in the hands of our customers. But we have not commercialized that chip. And the reason is that through our interactions on the silicon IP business, where we talk to all of these customers, a very wide range of customers, and a lot of them are using our CXL IP, what we realize is that a lot of these chips being developed by our customers are custom chips, or chips that only address one or two customers.
So the economics of a custom chip are not as attractive as the economics of a buffer chip, for example, which is defined by JEDEC and which the whole industry is going to adopt, whatever the end customer is. So we're watching for when the industry is going to line up around a chip that meets the requirements of all customers, or at least a chip that meets the requirements of a sufficient number of customers to make the economics attractive. The other thing that has happened in that market, with the delays of these chips in the CXL world, concerns the first use case for CXL, which was memory expansion: the ability to add memory to a processor through a serial bus and a CXL protocol.
Because once you have populated the whole memory bus and you cannot go further, and you need to add memory, the only option you had was to add another processor so you could have more memory. But that's not economical, so people were thinking of using a CXL bus to add memory. Now, what has happened is we think that MRDIMM is going to be a very strong contender in that space of memory expansion. Because as I said earlier, you're using the exact same architecture, the exact same infrastructure, and you just remove one DIMM, like a Gen 3 DDR5 today at 6,400 MT/s, and you replace it with an MRDIMM, which is a standard product in the industry, and you get to 12,800 MT/s, and you have twice the capacity.
That function of memory expansion that we initially thought would be addressed by CXL is going to be much better addressed by MRDIMM, in our opinion. We're in both camps. But we believe that a solution that uses the current infrastructure, the current architecture that is supported by JEDEC, which means that all of our competitors, all of our customers, and all of our customers' customers have aligned around that architecture, is going to give a very elegant and fast solution to the question of memory expansion.
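To summarize the comparison drawn here and earlier in the conversation, here is a minimal sketch contrasting a standard DDR5 DIMM with an MRDIMM in the same slot, using only the data rates and multipliers quoted in the discussion. Treat the numbers as rounded, generation-dependent figures for illustration, not a JEDEC specification, and the content multiplier as the approximate chip-vendor dollar content rather than module price.

```python
# Rough side-by-side of a standard DDR5 DIMM vs an MRDIMM in the same slot,
# using the figures quoted in the discussion (rounded and generation-dependent;
# not a specification).
modules = {
    "DDR5 DIMM (Gen 3)": {"data_rate_mts": 6400,  "relative_capacity": 1, "relative_chip_content": 1},
    "DDR5 MRDIMM":       {"data_rate_mts": 12800, "relative_capacity": 2, "relative_chip_content": 4},
}

base = modules["DDR5 DIMM (Gen 3)"]
for name, m in modules.items():
    print(
        f"{name:18s} {m['data_rate_mts']:>6} MT/s  "
        f"{m['relative_capacity']}x capacity  "
        f"~{m['relative_chip_content']}x buffer-chip dollar content"
    )

bandwidth_gain = modules["DDR5 MRDIMM"]["data_rate_mts"] / base["data_rate_mts"]
print(f"Bandwidth gain on the same slot and infrastructure: {bandwidth_gain:.0f}x")
```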
You're de-risked no matter which direction it ends up playing out.
Yeah, we have both capabilities. But we think we're going to ramp MRDIMM much faster.
Awesome. So we touched on the product business and the silicon IP business. And I wanted to cover the licensing business as well. It's been a stable foundation for Rambus for years. And I would love to hear a little bit more color on the licensees. Maybe we start with the renewal of the contract with Micron.
Sure. So, because it's a patent licensing business, we don't give details about every one of our contracts. But I would say that because it's foundational technology for memory interfaces, our licensees are either people building memory or people building chips that interface to memory. So we are licensed with the three memory vendors. And we are licensed with the vast majority of the people who develop chips, processors, and ASICs that have to interface to memory. So that's the structure of our business. That's what makes it $200 million-$210 million. It is very stable. It's high margin. But for my team working on it, there's a lot of work behind it. Every year, we have that $200 million to $210 million. But every year, we have to renew two or three contracts. And some are more visible than others, like the Micron contract.
So if you look at the memory space, we have renewed Samsung for 10 years, SK hynix for 10 years, and Micron for five years. Micron was always on a five-year cycle. But it's a testament to the strength of our patent portfolio and to, I would say, the confidence that people have in our ability to continue to invest in this space. So I'll say a few words about this. We still have a group called Rambus Labs at Rambus. And they are the ones continuing to develop patents in that area. That's critically important because our patent license agreements last five to seven years, or 10 years in the case of Samsung and SK hynix. The patent life is about 20 years. But our patent licensees want to see us continue to innovate.
That's really, really important for us because, as I said, the product business gives us visibility into 2026, 2027, 2028, and, when JEDEC goes to DDR6, into 2029, 2030. So we have visibility to 2030 and beyond in our product business. Our silicon IP business gives us visibility because we touch the whole semiconductor industry, and we see what chips they are developing. Our patent licensing business gives us visibility from a technology standpoint. Beyond that, we are thinking about what comes after DDR6. DDR6 is going to start in the market around 2030, in five or six years. But beyond DDR6, what are the technologies that are going to be required? So patent licensing, from that standpoint, is really, really important to us. The last point I will make about patent licensing is that we started as a patent licensing company.
I would say at that time, our reputation in the market was not the best. Now, our revenue from patent licensing is flat, and we expect it to continue to be flat at $200-$210 million. But what we've done as a company is we've taken the cash inflows from patent licensing, which carry rich margins, and we have invested that cash into technology and product development for products that matter to our licensees. So our relationship with our licensees has completely changed. There's no doubt that our largest licensees are the three memory vendors. But they have become our largest customers on the product side as well. So our licensing business has not grown. But our total business with these licensees has completely changed in terms of its size and its structure. So that strategy was really good for us because it allowed us to grow.
But it also allowed us to develop a much better relationship with the industry, where we actually contribute to their success by developing the products they need.
That's really good, the cross-collaboration across the ecosystem, with your customers, with your competitors, and the visibility you have into what's coming 10, 15 years ahead. On the last earnings, you posted record cash flow generation. And you have a very healthy balance sheet. That cash from those stable businesses helped you reinvest in R&D and strengthen your technology roadmap in product, as an example. You've done buybacks, so you return cash to shareholders. You've done M&A as well. And I think my question to Luc and to Des is, how do you think about capital allocation going forward, now that you have this very strong balance sheet?
Yeah, I think as a result of having a robust balance sheet and strong cash generation, we've been able to take a stable approach to capital allocation. It's really built around three pillars: organic investment, inorganic investment, and capital return. From an organic investment perspective, I think 2024 was a great example of that, as we touched upon earlier. We released eight chips, which significantly expanded our market opportunities. Inorganically, this is something we continue to evaluate. We do have a healthy funnel of targets there. What we've done in the past five years is make five smaller acquisitions, which has really helped get our silicon IP business to scale, and that's something we'll continue to look at going forward.
And lastly, on capital return, in the last three years, we bought back about $300 million of shares, which equates to about 60% of free cash flow returned to shareholders. So we do have that consistent approach there. Looking ahead, I would say we'll use the same playbook. I think we've deployed it very well over the past couple of years, and you'll continue to see the same areas of focus from us on the capital allocation side.
Great. I want to turn over to the audience for questions. I have a mic here and a mic there.
Hello. I do have one question. Power integrity and signal integrity are kind of different domains in the semiconductor industry, and IP licensing and product are also different business models. And Rambus, I remember, has something like 712 employees, right? So from a resource point of view, how can Rambus be so confident that you can reach something like 20% market share when you just released your first product within a year, considering there are also a lot of strong competitors in the market, like TI and Monolithic Power Systems, which have made the same products for many, many years? And they have established organizations and the infrastructure to manage the operations and deal with the foundries. Thanks.
Yeah, thank you. That's a good question. Competition is always good. They keep us on our toes; that's what they do to us. Remember, just as a reference, this is in our DNA. This is what Rambus had to do. We had to turn around a company from a patent licensing company with a poor reputation to a successful product company. We came late to market in DDR4. No one believed we would be in DDR4. And we got 20% market share in a few years. And it's a small world. We have hired people from our competitors who tell us now, we never believed you would be successful. And power management is another example. We started power management on our own two years ago. And we are qualified with the three memory vendors. We're going to ramp in the second half of 2025.
But one of the reasons, to get maybe to the bottom of your question, is that doing power management is one thing; doing power management on a module in a very harsh environment is a different craft. And we understand that environment really, really well. Understanding how a module works in a very tight environment with very strong physical constraints and thermal constraints is not the same as doing power management in any other environment. And that's an environment that we know very, very well. And that makes a difference. The other thing that makes a difference is that we're going after the same ecosystem with the same customers. We understand the details of the qualification processes. We understand the customer dynamics there. We understand the procurement policies there. There's a lot of knowledge that we have developed in that ecosystem that we can reapply to the new products that we introduce.
So there's a business aspect to it. But there's also a technology aspect to it, which is that doing signal integrity outside of a module is not the same as doing signal integrity in a module environment. Same for power management. That's the short answer to this.
We had another question over there, please. Thank you.
Great. Thanks. We heard a lot of discussion around memory development, particularly when you look at commodity DRAM going from DDR4 to DDR5. So on the base DRAM side, that's one thing. But the newer technologies like HBM4, HBM4E, and HBM5, do you guys get a kicker from those as well?
We get a kicker from a silicon IP standpoint, and the main reason, without going into the detail, is that when you use standard DDR memory in a standard server, you need the architecture that is provided by a module with a buffer chip. That's what's best from a technology standpoint for the industry. When you go HBM, HBM is used for parallel processing with GPUs, and in an HBM architecture, you do not have the equivalent of a buffer chip. The equivalent of a buffer chip in an HBM system is actually the HBM IP that sits in the ASIC. So it's not a product that sits on the module. It's HBM IP that sits in the ASIC. You have the ASIC, you have the HBM stack, and at the bottom of the ASIC, you have an HBM controller and an HBM PHY.
So that's why our HBM business is not a chip business, because such a chip does not exist apart from the memory itself. It's an IP business. That's the difference there. So we get a kicker from the IP business standpoint.
And for processing in memory?
Processing in memory, that's a niche opportunity that we look at. The idea here is, when you have a memory module, can you bring some processing capabilities as close as possible to the memory so that you do some sort of pre-processing? We have some of these discussions, mostly through our silicon IP business, at this point in time. That's a very small portion of our activities at this point in time. Thank you.
There's one here in the front.
My understanding is the MRDIMM, as you said, is a very elegant solution in part because it sort of came later, and it's sort of a retrofit, if you will, for DDR5. Do you expect the opportunity to change for MRDIMM as we move to DDR6 in the future?
So we see MRDIMM as a bridge between DDR5 and DDR6. If you look at the generations of DDR5, every year we increase the speeds of DDR5. Gen 3 is at 6,400 MT/s now. Gen 4 is at 7,200. DDR6 is not defined yet; it's being defined by JEDEC, and the architectures are going to be very different. But how do you get beyond 7,200 or 8,000? The only way is to have MRDIMM, because you jump to 12,800 just because of the architecture. So it's going to be a bridge between the current generations of DDR5 and the first generation of DDR6. And that's why it's going to address the higher end in the first place.
I see there was one last question. And that will be our last one here.
Thank you. On the issue of scaling DRAM to handle what's supposed to be coming, there are other technologies that people are working on, be it high-bandwidth flash or 3D ferroelectric RAM. Are these technologies that you could take advantage of if they were to eventually feed into the HBM opportunity?
Yeah. I'll try to make it short. This is a very good question. We believe that whatever the application, one of the fundamental challenges of the industry is that processing technology moves faster than memory technology. So there's always an opportunity to improve the memory subsystem architecture. That's what's driving DDR5 to DDR6. That's what's driving HBM3 to HBM4. But it can drive novel architectures as well. And the way we address it is, remember, we have our Rambus Labs group that defines our patent portfolio. They are the ones who look at all of these other possible architectures for memory subsystems. So we are in constant, I would say, monitoring of what other technologies can be implemented there, like novel packaging technologies, for example, novel stacking technologies, novel power management technologies. So we have Rambus Labs, which actually does this in collaboration with the industry.
And when we see something that has the potential to grow, we actually invest in it. That's what we do. We have the capability of taking stakes in private companies. We look at this in a very selective way. But that's something we can do. Yeah.
All right. Thanks, Luc. Thank you, Des.
Thank you.
Thank you, everyone.