This is among my favorite management teams. I really enjoy talking to these guys and, related to that, I really enjoy having them up here on stage. Frankly, I don't need a Q&A outline here; I could probably just ad-lib. But for the sake of everybody in the audience, both here in the room and online, Luc, maybe you can give us an overview and set a foundation for understanding the Rambus story.
Yes, thanks, Gary. So Rambus started about 30 years ago by defining the fundamental technology used in memory interface solutions, on either the memory side or the processor side of the interface with memory. The business started as a pure patent licensing business, and over time, we've transformed the company into more of a technology company, so that today, our main approach to market is actually through semiconductor products. If you look at Rambus today, after all these years, we have three ways we go to market. We continue to have a patent licensing business, which is stable and a strong cash generator. It's about a $200-$220 million a year business with long-term visibility and high margins.
The second way we go to market today is through our Silicon IP activity. The Silicon IP activity is an activity where we develop pieces of IP that we sell to semiconductor companies, and these semiconductor companies integrate these pieces of IP into their semiconductor products. So typically, they build SoCs, and they need some high performance interfaces or high-performance security solutions, so they buy those from us and integrate those into their semiconductor products. That business, the Silicon IP business, is about $105 million-$110 million a year, and that's a business that can grow 10%-15% a year on average. The third leg of our business is really semiconductor products. So we are a fabless semiconductor company.
Like many other fabless semiconductor companies, we build interface chips that go into data centers and that fit between processors in the data center and memory modules in the data center. That business has shown phenomenal growth over the past few years. When we started, in 2018, that business was about $39 million a year. Last year and this year, it's going to be about $225 million. So this is the high-growth engine of our business today.
Okay. And I can vouch that that last segment, the memory interface chip business, is what most investors are interested in for the Rambus story, so that's probably where we should start. Maybe you can give us a sense of the types of chips you develop, the markets targeted for these chips, and what drives the need for them?
Okay. So the chips we develop, you know, I said earlier, they sit between processors and memory in the data center. So in every data center, you have processors, and these processors need to access data that is in memory, and these memories are in the form of memory modules. The three memory vendors are SK Hynix, Samsung, and Micron. They make those modules, and we sell our chips to those memory vendors. And the first chip that a processor sees in a data center when that processor talks to the memory module is our chip. That's an interface chip. And the purpose of this chip, to your second question, is to really clean the signals between the processors and the memory. Because in data centers, people want to have access to data faster, and they want to have access to more data.
One of the challenges this causes is that you have a lot of signals traveling at very high speed on that interface, and that creates a lot of noise and a lot of crosstalk. These chips actually reconstruct the signals. They clean the signals so that there are no mistakes in transferring data between the processors and the memory chips and memory modules. These are very complex chips to make, and the value that we add is what we call signal integrity. That is the ability to have those signals travel very, very fast between the processor and the memory.
Okay. It's a constant treadmill of innovation. Starting at the processor level, it's all about more processor cores per given die area, and that requires more and faster access to memory. And that's essentially the problem you're solving, which drives new generations of the DDR standard from the JEDEC standard-setting body. Maybe you can give us an appreciation of the current product generation or technology cycle that's underpinning the server DRAM market.
Yeah. So to start with, as you correctly said, it's all about us wanting more data faster. We see this in our everyday lives: there's pent-up demand for data, and we want access to that data very, very fast. The processors, typically from Intel and AMD and now from others, have a memory bus that goes faster and faster. Every time they introduce a new processor, the interface on the memory side is faster, and we have to develop a faster chip. That's what drives the generations of products. These products are defined by a standardization body called JEDEC. JEDEC defines those interfaces and those products, and we have to comply with the specifications made by JEDEC.
The industry has been on a generation of products that we call DDR4 for many, many years, the last 6-7 years. In the DDR4 generation, there were several sub-generations going faster and faster. Last year and this year, the industry is moving to a new generation of memory interface called DDR5. Surprise, surprise. That means the processors from Intel, AMD, and others have to have that interface integrated into their processors, the memory vendors have to have DDR5 memories, and we have to have these DDR5 buffer chips. The DDR5 generation of products offers higher speed and higher capacity. That's what people want. The whole industry is transitioning from DDR4 to DDR5. There are a few differences between the DDR4 and DDR5 generations.
In the DDR5 generations, the rollout of sub-generations with higher and higher speeds is faster. In the DDR4 era, there was a new sub-generation of products every other year; in DDR5, it's every year, so the pace has doubled. The second thing that has happened with this standardization body is that the signals travel so fast, and the data is so dense, that some of the functions that sat on the motherboard of the server in the DDR4 generations have now moved to the memory modules. These are temperature sensors, SPD hubs, and power management chips. We call them companion chips. On a DDR4 module, you didn't have companion chips; you just had an interface chip.
If you go to the DDR5 generations, which is introduced today in the industry, you still have this interface chip, but you also have, you know, companion chips sitting on the same module. And that offers, you know, a new opportunity for some expansion for us, because we are developing those companion chips as well.
Okay. You spoke earlier about how, since you got into this market, your revenue has increased roughly sixfold. Some of that is just growth in the market, but most of it has been credited to your market share gains. And if I'm not mistaken, your market share should further improve in DDR5, given your qualification footprint. So maybe you can paint a picture for us of the influence of market share gains on your growth as we move from DDR4 to DDR5, for the main chip you sell.
Yeah. As I said, in 2018, we literally had 0% share, with $39 million. This year, in 2023, with the guidance we gave for Q4, our revenue is going to be around $225 million, and that represents a little north of 30% share. Last year, we were a little north of 25% share. So most of our growth was actually gaining share, or growing faster than the market. And the ability to grow faster than the market comes from a couple of things. One is, if you follow the history of the company, we have refocused the company on that business only.
We actually eliminated, over time, a lot of activities that were not central to that mission of making the interface between the processors and the memory. That focus helped us when our competitors were less focused during that period. The second reason is that we have a very strong technical and commercial relationship with the ecosystem, which allows us to have design wins early, and the design win footprint is an early signal of market share. We always try to make sure that every time there's a new generation or sub-generation of product, we are ahead in our development schedule. We engage with the ecosystem very early, because the qualification and validation of those chips in that ecosystem takes time and is very complex. This is a very close-knit ecosystem with very powerful companies.
We can name them. You have Intel and AMD, and then you have newcomers like Amazon, NVIDIA, Microsoft, and others on the processor side. And on the memory side, there are only three vendors. So the technical relationship and the validation process with those companies is extremely important. It's very important that we have a footprint early, and that's what we're doing with DDR5. So today, as mentioned earlier, we have a blended market share between DDR4 and DDR5 of slightly north of 30%. Our objective, as the market moves completely to DDR5 a few years from now, is to get to 40%-50% share-
Okay
... on the interface chip.
Okay. So focusing again on the interface chip, there are a couple of different undercurrents in the market. You have maybe some inventory digestion on the DDR4 side, but DDR5 industry adoption is still in its early days, so you've got the sequential ramp there helping offset that softness on the DDR4 side. This morning, Micron positively pre-announced results, so clearly pricing is firming in the memory market. And when pricing is firming and rising in the memory market, as a buyer, you don't want to get caught with not enough inventory. So do you read Micron's positive pre-announcement as a signal that we could be nearing the end of the DDR4 inventory depletion?
So yes, this year, and we said it over and over, it's been a very lumpy year in terms of that transition between DDR4 and DDR5. Last year, we were struggling with supply restrictions, and, you know, we had to fight to get parts for our customers. Our customers have built inventory on DDR4, and this year, what's happening is that we see an inventory digestion, you know, happening in the market. And that inventory digestion on DDR4 continues to be the case. The vast majority of our revenue in Q2 was DDR5. That's true for Q3. That's also true in our guidance for Q4. So the DDR4 inventory digestion has been quite deep.
However, we see parts moving down the value chain, you know, and we expect the DDR4 inventory situation to normalize in the first part of 2024. At the same time, the ramp of DDR5 was having what we call teething problems, you know, growth problems. There was a new generation of processors, a new generation of memory, a new generation of interface chips, the addition of companion chips. All of these things had to ramp at the same time in that ecosystem, and the level of complexity was probably underestimated. So this year, what we've seen is inventory digestion on DDR4 and a slower than expected ramp of DDR5.
As a result, we believe the market, this year compared to last year, went down by 10%-15%, but we maintained our revenue flat, which explains why we continued to gain share. But to your question, you know, we believe the DDR4 inventory correction is going to be completed, you know, in the first part of 2024, and we're going to be back to normal levels of, you know, POs and shipping on DDR4.
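As a quick sanity check on that share arithmetic: holding revenue flat while share rises implies the underlying market shrank. A minimal sketch, using round figures from the remarks above (25% and 30% stand in for "a little north of" each; this is an illustration, not reported data):

```python
# Illustrative back-of-envelope check of the share math described above.
revenue_2022 = 225.0  # $M, revenue held roughly flat year over year
revenue_2023 = 225.0  # $M, per the Q4 guidance mentioned
share_2022 = 0.25     # approximation of "a little north of 25%"
share_2023 = 0.30     # approximation of "a little north of 30%"

# Implied total market size in each year, in $M
market_2022 = revenue_2022 / share_2022
market_2023 = revenue_2023 / share_2023
decline = 1 - market_2023 / market_2022

print(f"Implied market: ${market_2022:.0f}M -> ${market_2023:.0f}M "
      f"({decline:.0%} decline)")
```

The implied decline of roughly 17% lands in the same ballpark as the 10%-15% market contraction cited, with the gap reflecting the rounded share figures.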
Just to recap, it sounds like you've got lots of near-term tailwinds: market share gain opportunities, market growth opportunities, average selling price tailwinds, and more content as it relates to DDR5 Gen 1. So let's move the discussion to how you further expand your addressable market opportunity, starting with some of the upcoming processor refreshes from the two main big general purpose compute players. We have Emerald Rapids, which I believe will be announced by Intel in a few weeks, and we have AMD's Turin, or Gen 5 EPYC, on the horizon. Maybe you can explain how that creates an opportunity for you to intercept opportunities in Gen 2 of DDR5.
Yeah. So every time either Intel or AMD introduces a new processor to the market, typically the speed on the memory interfaces increases. So every time there's a new generation out there, we have to design a new generation of buffer chip at that higher speed. So as we explained earlier, Gen 2 with Emerald Rapids is an opportunity for us. We have designed our Gen 2 product, you know, some time ago because it takes time to go through the ecosystem for validation, and that's going to give us another opportunity to gain share. Again, I think what's driving this market is more data faster. So every time there's a new processor, it's faster. The way Intel and AMD address the question of: How can we get access to more data?
is really by putting more memory channels on each of their processors. Today, AMD has 12 memory channels in its current generation of products; Intel has eight memory channels in its current generation. Emerald Rapids is still going to be eight, but the follow-on generation, Granite Rapids, is going to be 12. And 12 channels means you potentially have 12 memory modules, and if you put two memory modules per channel, potentially 24 memory modules, and each memory module uses one buffer chip, one interface chip. So that's how we look at the market growth every time they introduce a new product. But I would stress that when they introduce a new product, we have to design a new interface chip.
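The module-count arithmetic above can be sketched in a few lines; the channel counts are the ones quoted in the remarks, and two DIMMs per channel is the potential maximum mentioned, not a guaranteed configuration:

```python
def interface_chips_per_socket(channels: int, dimms_per_channel: int = 2) -> int:
    """Each memory module (DIMM) carries one buffer/interface chip, so the
    per-socket chip count is channels times modules per channel."""
    return channels * dimms_per_channel

# Eight channels (Intel current gen / Emerald Rapids): up to 16 modules
print(interface_chips_per_socket(8))   # 16
# Twelve channels (AMD current gen, Granite Rapids): up to 24 modules
print(interface_chips_per_socket(12))  # 24
```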
Okay. Most of our discussion so far is focused on the server portion of the market, but there will eventually be a need for higher signal integrity in client compute applications and hence, an opportunity on the client clock driver. Maybe you can give us an update on where you stand on those R&D efforts to tap into the PC market.
Yes. So the data center has very high requirements in terms of speed and signal integrity. Over time, we're going to see the same challenges appearing on the client side, especially at the high end. Today, if you look at a client solution, you don't have the equivalent of a buffer chip: you have a processor that interfaces directly with a memory module, without an interface chip. But as speeds increase on the client side, we believe the same challenges will need to be addressed, especially signal integrity. That will create a need for the equivalent of a buffer chip, which we call the Client Clock Driver.
The role of this client clock driver is gonna be the same as a buffer chip in the data center, which is to clean the signals so that the communication between the processors and the memory on the client side works as well. So it calls on the same skills, it calls on the same competence, so that's something we are investing in. We believe these very high speed client solutions will hit the market, probably in 2025, if we look at the roadmaps from our friends on the processor side, but we have started the development as we speak.
Okay. Let's get Des into the discussion.
Yes.
Talk about pricing for these different types of chips, the RCDs, the companion chips. Maybe you can give us a sense, now that we're seeing mass market adoption of DDR5, of what the trends are in the RCD selling price and, related to that, what you feel is a sustainable gross margin for these chipsets.
Yeah, it's a great question, Gary. From a pricing perspective, DDR4 has really hit the floor. This is a product that's been in the market for 7+ years now. As we move to DDR5, what you see is a generational change and a complete ASP reset, which is obviously beneficial for us. What we've seen in the DDR5 cycle is that in the first half of the year, when we were shipping lower volumes of the products, the ASPs carried a heavier premium.
We're seeing a more normalized ASP going into the second half of the year on these Gen 1 products, and that's what we expect now on Gen 1: stabilization and entry into normal pricing cycles going forward. As a company, we have a long-term target for gross margins on the product side of 60%-65%. If you look at last year, 2022, we were around 61%. This year, I think we'll be somewhere between 62%-63% on the product gross margin side, which is very healthy for a chip business. So we're very pleased with our gross margin performance.
We'll continue to be disciplined in our ASP approach, and we'll continue to drive the cost savings so that we manage towards the long-term gross margin target of 60%-65%.
Okay. At a time when everybody wants to talk about AI, general purpose compute is almost viewed as a negative term, right? Everybody wants exposure, from an investment perspective, to GPU acceleration for AI workloads. Maybe you can explain how you may be affected, positively or negatively, by this CapEx shift from general purpose compute to graphics compute.
So the short answer is that this push for AI is good for us. It's all about getting more memory faster. But if we double-click on that: AI workloads on the training side need specialized processors, GPUs in particular. And because the algorithms are highly parallelized, they access a different DRAM memory architecture, HBM memory. Remember, we have three legs to our business: patent licensing, silicon IP, and silicon products. In the second leg, silicon IP, we are leaders in developing HBM controllers.
So anyone who wants to develop GPUs or DPUs or specialized processors that access HBM memory can buy that controller interface from us. When you access HBM memory, you don't need an interface chip; the GPU accesses the memory directly, but the GPU uses an HBM controller, and we make those HBM controllers. That's one play we have in AI. Then, if you look at an AI box, you have a rank of GPUs accessing HBM memory. But what you also have in an AI box is standard servers with standard memory modules using standard buffer chips.
Actually, if you look at the amount of memory, you have more of that standard memory attached to the standard processors than you have HBM memory, in terms of capacity. And the role of these standard processors in the AI box is pre-processing: preparing the data before it goes into the GPUs for processing. Typically, they use high-capacity modules. So from that standpoint, AI boxes are an additional way for us to expand the TAM. Now, what has happened this year on the CapEx side is that the CapEx going to AI was quite high, and that CapEx was moved away from other solutions that people were considering.
But that has not affected us negatively. As we said earlier, you know, this year we continue to gain share, you know, with our buffer chip business despite this AI trend.
Okay. So your chips, or memory interface chips, if I'm not mistaken, are primarily levered to what are called RDIMMs, SO-DIMMs, and NVDIMMs. Maybe you can share with us, from a chipset perspective, how you're trying to grow your SAM with respect to CXL, maybe the different versions of CXL, and also how you're trying to capture some opportunities beyond traditional DIMMs.
Yeah. So let's separate the client space from the data center space. On the data center space, the way we expand our SAM is, number one, by continuing to design those interface chips. As we said earlier, the pace of introduction of new generations of chips is faster in the DDR5 generation than it was in DDR4. So every year, approximately, we have to generate a new product for this market. So that's the first way to expand the SAM. The second way to expand the SAM in the data center is through the companion chips. You know, when the industry moves from DDR4 to DDR5, we have four companion chips, three types, but four chips on the module, and we are developing those chips.
The SAM for the companion chips we estimate to be about $600 million a year in total. That adds to the buffer chip, or interface chip, SAM, which is estimated at $800 million-$850 million. So that's the data center. Then, because the industry is still hungry for more capacity, one way to add more capacity in the data center is by using a CXL bus, a serial interface, as opposed to the parallel memory interface. We believe that chips for memory expansion solutions in the data center are going to hit the market probably in the 2025 time frame, maybe 2026.
We estimate the TAM for these CXL interface chips to be in the range of $600 million a year. So when we look out, we do see these generations of products, or generations of architectures, building up the SAM for the data center. Now, if we move to the client space, as we said earlier, we believe some of the challenges we found in the data center are going to appear at the high end of the client space. That will lead to products hitting the market in the 2025-2026 time frame with a SAM of approximately $300 million a year as well.
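Adding up the opportunity figures quoted in the last two answers gives a rough picture of the combined SAM. A minimal tally, taking the midpoint where a range was given; these are the management estimates from the discussion, with the CXL and client figures tied to 2025-2026 availability, not current revenue:

```python
# SAM figures ($M per year) as quoted above; midpoint used for the
# $800M-$850M interface chip range.
sam_millions = {
    "DDR5 interface (buffer) chips": 825,
    "DDR5 companion chips": 600,
    "CXL interface chips (2025-2026)": 600,
    "Client clock drivers (2025-2026)": 300,
}

total = sum(sam_millions.values())
for segment, size in sam_millions.items():
    print(f"{segment:<34} ${size:,}M")
print(f"{'Total':<34} ${total:,}M")  # $2,325M combined
```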
Okay. We've spent a disproportionate amount of time talking about the chipset business, but it is very key to the growth of the company. Now, I wanted to shift gears to a couple of questions on the patent licensing side. You have Micron coming up for renewal next year; maybe you can give us a sense of how you see that playing out, and, related to that, when the new Samsung and Hynix agreements kick in, the convergence between GAAP and non-GAAP revenue.
So I'll let Des answer on the GAAP and non-GAAP; I'll give a quick answer on the renewals. Because we hold the fundamental technology of memory interfaces, the memory vendors have a patent license agreement with us. That's the case with SK Hynix, Micron, and Samsung. But everyone building a processor that sits on the other side of the interface also has a patent license agreement with us. We're not public about it, for obvious reasons, but every year we renew two or three patent license agreements with the industry, and that works really, really well. Typically, a patent license agreement lasts five years. The life of a patent is twenty years, so a lot of patents that are relevant in one contract continue to be relevant in the follow-on contract.
In the case of SK Hynix and Samsung, we announced over the last 12 months renewals with them for 10 years, which is long. Before we move to the financial aspect of that, this is really, really important. If you had looked at Rambus 10 years ago, we were a pure patent licensing company, and our relationship with the ecosystem was not ideal, because people were paying us money but didn't necessarily see the direct benefit of it. What we've done as a company is invest that money into products that are relevant to their roadmaps, and now our largest licensees have become our largest customers as well. So we have a much better relationship with the ecosystem.
The very fact that SK Hynix and Samsung have agreed to extend for another 10 years is a testament to the confidence they have in our ability to continue to build products for them. Now, in terms of, you know, how we account for that, Des, maybe you want to comment?
Yeah. Thanks, Luc. As these renewals have come up, what we've done is structure them in a way that they become revenue-recognition friendly. If you look at our Q3 results, there was about a $30 million difference between licensing billings and royalty revenue. Going into Q4, this quarter, Samsung becomes revenue-recognition friendly, and that will take the difference down to around $15 million. The next inflection point is Q3 of 2024, when SK Hynix becomes revenue-recognition friendly; at that point, we'll just have a $2 million difference between licensing billings and royalty revenue. So the convergence of the financials will really take place in Q3 of 2024.
So it's a day we're very much looking forward to, Gary.
Okay. So one of the common questions I get is: How are you able to continuously renew these patent license agreements? Is it your contribution to the JEDEC standards that keeps you relevant from a patent pool perspective and from a patent licensing perspective? Maybe give us an appreciation of what keeps that patent pool relevant.
This is how we started. These fundamental interface technologies are where the company started; that's really the core of our competence. We've kept a group of researchers who work only on thinking about how these technologies are going to evolve in the future. What's going to be required for DDR6? What's going to be required for GDDR7? And we continue to file patents in that space. It's important to our customers, because they know we continue to invest in pure R&D, pure research, that will contribute to the industry. And every time we go for a renewal, as indicated earlier, because the lifetime of a patent is 20 years, a lot of the patents remain relevant.
But we also renew and enrich our patent portfolio on a regular basis, and that's what's creating our ability to continue to renew on a regular basis.
Okay. The last few minutes we have remaining, I was hoping you can address an overcapitalization situation, right? Too flush with cash. First world problem. But, you know, maybe you can give us your priorities in how you plan to, you know, utilize that cash.
Yeah. I think, as we've talked about today, we have a diverse financial model that continues to generate strong cash on a quarterly basis. This has enabled us to have a consistent capital allocation approach across the years, Gary, which is really founded on three fundamentals. The first is organically funding the high-growth opportunities ahead of us; I think Luc has done a great job of outlining them earlier today. Inorganically, we continue to look at opportunities ahead of us. We have been acquisitive; I think we've made five acquisitions over the last four years, which has really gotten our silicon IP business to scale, to the $105 million-$110 million that Luc talked about earlier.
As a company, we do have strong firepower as we continue to look at larger opportunities, maybe on the chip side. The timing of M&A is obviously very difficult for us to predict, but it's something we'll continue to look at going forward. And we'll continue to be disciplined: it needs to make sense strategically, operationally, and financially. The last pillar of our capital allocation has really been returns to shareholders. We have a commitment of 40%-50% of free cash flow back to shareholders, and if you look over the last three years, we have returned about $300 million to shareholders, which is above the high end of that commitment. And that's something we'll continue to do.
But again, I think it'll be a similar playbook going forward, and the timing will obviously move around depending on opportunities from an M&A perspective.
Okay. You literally cut it down to our last five seconds. So with that, Des and Luc, I appreciate the time today, and I appreciate all the people here in the audience and those who joined us online. So again, thank you.