We're pleased to have with us today Matt Jones, Vice President of Strategic Marketing, and Desmond Lynch, Senior Vice President and Chief Financial Officer. And with that, let's get started. Gentlemen, good morning. I think you have a few slides for us, first.
Good morning, Tristan, and thank you for hosting us today at the conference. It's a pleasure for Matt and me to be here. Maybe I can move the slides. Before we begin, I'd like to mention our safe harbor statement regarding any forward-looking statements. I would encourage everyone to read our documents on file with the SEC, which contain much more information on the company than we will cover in this short presentation. Rambus has been a pioneer in the semiconductor industry for the last 30-plus years. The company was founded on foundational IP associated with memory interface technology, which can be found in all of today's modern compute systems.
The backbone of the company has been its patent licensing program, which has generated millions of dollars of free cash flow each year and has enabled us to invest both organically and inorganically in product programs. Next slide. This slide is a great representation of how we go to market as a company. There are three ways we go to market: first, our patent licensing program; second, our silicon IP business; and last, our chip business. Starting with the patent licensing program, this is the foundational IP associated with memory interface technology, and we have a robust portfolio of almost 3,000 patents, which generates around $200 million-$220 million of revenue each year.
It's been very stable at the midpoint of this range, given the long-term patent agreements that are in place. Our top licensees on the patent side are the top DRAM companies, and we also license a variety of memory, SoC, and FPGA providers. Moving on to our next area, the silicon IP business. Here we have a very focused portfolio centered around security IP as well as interface IP. We license building blocks of IP to semiconductor companies, who integrate them into larger ASIC and SoC solutions. The business is operating at scale today, at about $110 million of revenue last year, with the expectation of growing 10%-15%.
We're very pleased with the customer diversification this business offers us, which ranges from start-ups all the way through large, leading, well-known semiconductor companies. Moving on to our chip business, the final area for us. This is where we sell buffer chips to the DRAM companies, who integrate our chip onto the module. The buffer chip is a small but complicated chip: it sits between the memory and the processor and controls the speed of commands between the two. Our growth on the chip side has been exceptional, as we've grown from $38 million in 2018 to $225 million last year. We're really excited about the future growth opportunities in the DDR5 cycle, which is in its infancy just now.
I'll now hand it over to Matt, who can talk a little more about some of our product solutions in a little more detail. Matt?
Yeah, thanks, Des. To put into context what Des just talked about, the ways we go to market, let's take a very simple example in the data center. In the center of the slide is a server motherboard; this could be a traditional server or, in this case, specifically an AI server. On the left-hand side, you see one of the modules that plugs into a number of slots for direct-attached DRAM.
This is often referred to as a registered DIMM, or RDIMM, in the industry, and you see the Rambus content on those chips: the RCD chip that Des talked about, and the emergence of new companion chips, which I'm sure we'll be touching on with Tristan this morning. These are the SPD hub, the temperature sensor, and the power management IC, which have moved onto each of the modules in the DDR5 generation that we're just embarking on.
If we expand the reach and look at the second way we go to market, as Des talked about, with our silicon IP: in an accelerator card, and specifically an accelerator ASIC or GPU, you see the role our silicon IP blocks play. PCI Express or CXL provides serial connectivity between an accelerator card or accelerator chip and the rest of the system, and the HBM or GDDR memory interface serves the high-performance memory we see coming more and more with the emergence of accelerated workloads, AI specifically.
And importantly here for these new silicon accelerators, as well as for the data moving between the accelerator and the CPU, we see an increased need for security in these new system architectures. Rambus provides both a hardware root of trust to secure the chips and security solutions for data in motion, MACsec and IPsec specifically, as well as a variety of other products. So just a quick frame to give you some context as Tristan asks us some questions.
Great, thank you. Maybe the first question is high level. You've been around for 30 years, and your chip effort is kind of new relative to the history. Could you explain how you became a chip supplier and how you've achieved your current leading position?
Yeah, that's a great question, Tristan. We got into the chip side of the business at the request of one of the top DRAM companies, who thought that, given our rich heritage on the memory interface and patent side, we could become a trusted supplier to the market. We acquired the assets from Inphi in 2016, which propelled our own internal development efforts, and our execution since then has been exceptional. On the DDR4 cycle, we proved ourselves to be that trusted supplier by producing quality, reliable products, and we grew from almost 0% market share to mid-20% market share in 2022, so we saw some really nice growth there.
As we were late to the DDR4 cycle, we invested early in the DDR5 cycle, and as a consequence we have a leadership position there, where we're targeting 40%-50% market share on the buffer chip alone. Last year was really the first full year of DDR5 production shipments, and we were pleased to reach approximately 40% market share, in line with our long-term stated goal. So we're very pleased with our execution and performance on the chip side of the business.
Great. How large is the memory interface market? What are the growth drivers? And maybe additionally, could you give us a sense of DDR5's adoption rate as a percentage of total DRAM today and where that's going over the next couple of years?
Yeah, that's a great question. I would size the buffer chip market last year at around $700 million. This was down double digits compared to 2022's market size, driven by inventory digestion challenges as well as AI's share of wallet in a fixed CapEx environment. This year, 2024, we do see the market returning to growth, which we expect to be in the mid to upper single digits. And in the longer term, our expectation is that the market grows double digits.
The DDR5 cycle has really been propelled by increased demand for capacity and bandwidth, and we see that in the higher module speeds being introduced to the market just now. In addition, we see more channels with DDR5 compared to DDR4: DDR4 was probably 6 channels, and on the DDR5 cycle today we're seeing up to 12 channels. On fully populated systems, we're seeing 2 DIMMs per channel, and each DIMM requires a buffer chip, so you see the chip opportunity for us. In terms of DDR5 adoption, I would say that for Rambus as a company, given the inventory digestion challenges, DDR5 crossed over to the predominant share in the first half of last year.
I think the industry, in the first half of 2024, has seen the crossover to DDR5 from a bit perspective, and that will continue to grow going forward as DDR5 systems become predominant.
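To make the channel math above concrete, here is a small back-of-the-envelope sketch. The channel and DIMM-per-channel counts are the ones cited in the answer; the per-server totals are simple multiplication, for illustration only.

```python
# Illustrative buffer-chip count for a fully populated server,
# using the channel counts cited above (not official figures).

def buffer_chips_per_server(channels: int, dimms_per_channel: int) -> int:
    """Each RDIMM carries one buffer (RCD) chip, so chips = DIMM count."""
    return channels * dimms_per_channel

ddr4 = buffer_chips_per_server(channels=6, dimms_per_channel=2)   # 12 chips
ddr5 = buffer_chips_per_server(channels=12, dimms_per_channel=2)  # 24 chips

print(ddr4, ddr5)  # 12 24
```

On these assumptions, the buffer-chip opportunity per fully populated server roughly doubles from the DDR4 to the DDR5 generation, before any share gains or ASP changes.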
Great. Rambus has been investing in companion chips within the DDR5 product cycle, including the SPD hub, temperature sensor, and power management chips. Could you provide more color on these chip offerings, the size of the market, and your expectations for revenue and growth?
Yeah, that's a great question. The DDR5 cycle moved these chips onto the module itself; under the DDR4 cycle, they were on the motherboard. So this offers an adjacent chip opportunity next to our buffer chip. We call these the companion chips, and the three chips are the SPD hub, which is a communications chip, the temperature sensor, and the power management device. We've taken a very strategic approach to the DDR5 rollout: number one, make sure we win on the highest-value chip, which is the buffer chip, and gain the market share I talked about earlier. We did release our SPD hub and temperature sensor solutions, and they are shipping in low volume today.
Just last month, we announced our power management solution, which completes our chipset. This took us a bit longer to get to market, as we had to build that capability and skill set in-house; it's not a skill set Rambus has historically been known for. In terms of the companion chip market, we see this being about a $600 million market, where we're targeting getting towards 20% market share. From a revenue contribution perspective, we will see revenue in the back half of this year growing into 2025 and beyond. We're very excited about the additional chip and growth opportunities the companion chips offer us.
So if I look at the incremental ASP, how do I quantify it relative to your existing content? Beyond the growth opportunity, what is the content moving from and to as a result of those companion chips?
Yeah, it's a great question. What I've said is that the temperature sensor and SPD hub add a couple of dollars of content, and the power management chip adds another couple of dollars. So in addition to the share gains we've talked about on the DDR5 buffer chip, you see a nice content increase from an ASP perspective, and we're very pleased about the opportunity ahead of us.
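A minimal sketch of the content math implied here, reading each "couple of dollars" as roughly $2; actual ASPs are not disclosed, so these dollar values are placeholders.

```python
# Hedged illustration of incremental DDR5 companion-chip content per module.
# Dollar values stand in for the undisclosed "couple of dollars" figures above.

companion_content = {
    "SPD hub + temperature sensor": 2.0,  # "a couple of dollars" (assumed)
    "power management IC": 2.0,           # "another couple of dollars" (assumed)
}

added_content = sum(companion_content.values())
print(f"Incremental companion-chip content: ~${added_content:.0f} per module")
```

Under that reading, the companion chips add on the order of $4 of content per DDR5 module on top of the buffer-chip ASP.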
You mentioned the market share you've achieved and your targets. What's the competitive landscape in the memory interface market, how do you feel about your position performance-wise, and what differentiates Rambus versus other companies in this space?
Yeah, that's a great question. From a competitive perspective, within the buffer chip space we're in a small ecosystem of three customers and three suppliers. Our competition is Montage, the Chinese supplier, as well as Renesas, the Japanese conglomerate that acquired the IDT assets. We view Rambus being the last U.S.-based supplier in this market as a strategic advantage going forward. We've executed very well against both competitors and gained market share on both DDR4 and DDR5, and we feel we're very well positioned in the market. In terms of the companion chips, we will see some additional competitors beyond the traditional competitors of Renesas and Montage, especially on the power management side.
You will see Monolithic Power and TI there, but we feel we're very well positioned to continue to grow and take market share.
Great. Are you also seeing interest not just on the basis of performance, but on what I would call geo-dependable supply, given the environment today? Is that a factor as well relative to some of your Asia-based competition?
We've certainly heard from a lot of our customers about the security of the U.S. supply chain and being that preferred supplier. There have been no restrictions against any of the suppliers, but I think we're all aware of the geopolitical tensions. Long term, I think our being the last U.S.-based supplier in this market is the strategic advantage I mentioned earlier.
Okay. So the next question is AI. How do you benefit over time? Obviously, the demand for DDR5 is driven by higher performance and more complexity, and I think you also have some technologies, or IP, that could relate to HBM or other memory technologies in AI. So could you talk about how that can help over time, what the opportunity is, and any way you want to quantify it in terms of TAM or growth potential?
Yeah, Tristan, that's a great question. Certainly, when we look at the chip business as it plays into DDR5 memory modules today and in the near future, what we find in the infrastructure that supports AI, that last mile of training large language models today and later inference, is very rich content in AI servers. Generally speaking, the current generation ranges from 2-4 TB of system DDR, which is about 2-4x the average server if you look at some of the industry reports that follow that.
So that gives us an increased opportunity as we see the growth of AI servers off a small base but at a very high growth rate. Very exciting for our core chip business. Secondly, as I touched on in the opening remarks, our silicon IP plays into key technology and chips that drive the performance of accelerated workloads: leading-edge interconnect IP blocks for PCI Express, with PCI Express 6 being the state of the art in IP and in leading next-generation chips. CXL is a follow-on interconnect, where you add coherence to PCIe for different functionality.
And then the key problem we solve in DDR5 with our RCD chip, the interface between the compute complex and memory, we solve that same problem with a piece of silicon IP, a controller, in both HBM, which is the predominant memory choice for AI training, and GDDR, which you'll see used more as we move into AI inference in the industry. So across both silicon IP and our chip business, there are amplified opportunities for us in AI.
Great. And then, as a follow-up question, could you talk about your expectations for the evolution of memory architecture related to AI systems, and how you plan to play in those new architectures?
Yeah, certainly. DDR5, and its step up in both capacity and bandwidth for general-purpose compute as well as accelerated workloads such as generative AI, really is a step forward for the industry. But that growing demand for memory capacity and bandwidth is insatiable. You see AI servers moving from 2 terabytes to 4 terabytes in a single generation, as I talked about, and I think there have been recent announcements at Computex that are going to push that even further about a year from now. So insatiable growth there. And what we and others in the industry continue to work on is novel ways to address that.
So going forward, you'll see talk about technologies like the multi-rank DIMM, where we take advantage of today's memory technology to provide more capacity and bandwidth to systems. We're very excited about how we can apply our heritage in the memory industry to novel concepts, MRDIMM being one of the examples I think you'll see going forward, Tristan.
Great. And then on your last earnings call, you talked about opportunities emerging in the client market. Could you expand on this as well?
Yeah. Another vector, beyond the capacity and bandwidth limits being stretched by accelerated workloads in the data center, is more and more performance in the client. Certainly, AI PCs are becoming more talked about these days, but in general, performance is growing on the client side. As DDR5 utilization continues to grow in the client and speeds reach a certain threshold, similar clock management and signal integrity requirements trickle down from the data center into the client. So we're very excited about those thresholds being reached later this year on the client side and seeing some of the clocking technology trickle down into the client.
Additionally, our recently announced power management products, as you touched on, have numerous applications across memory subsystems, and that is something that will come in the future as well.
Great. I figured I would give the audience an opportunity for questions; otherwise, I have a full list. Okay, great. So, moving to silicon IP, could you outline your strategy in silicon IP and provide some color on the IP that you sell, how you've developed it over time, and your customer base there?
Yeah. In silicon IP, we have a very focused portfolio centered around security IP and interface IP. The business is operating at scale today at about $110 million, and we expect to grow 10%-15% on a go-forward basis. Customers take our IP and integrate it into larger ASIC and SoC solutions, and the model is that we sell the IP on a per-license basis. We see a lot of customer diversification here, ranging from start-ups all the way through to well-known semiconductor companies. Looking at our portfolio today on the security IP side, we continue to see the growing importance of data security, especially as we move towards heterogeneous compute environments.
Our portfolio today is built around leading-edge IPsec and MACsec technology, which helps secure data in motion. On the interface IP side, what we have is a controller portfolio for interfacing with the memory: leading-edge technology solutions in GDDR, HBM, and PCIe. We're very well positioned to continue to grow the silicon IP business going forward.
Okay, great. What are the growth vectors in silicon IP? Is that driven by new products, new customers, content increases?
Yeah, I would say one of the key items to watch on the silicon IP side is R&D design starts. Again, customers take our IP and build it into larger ASIC solutions, as I mentioned earlier. From a market perspective, we're well positioned against the data center and AI, given some of the technologies I mentioned, and we continue to see growing opportunities in the automotive and defense spaces. And we continue to innovate: we've recently announced GDDR7 solutions, HBM3E solutions, as well as CXL solutions. So we'll continue to innovate the portfolio to drive the above-market growth rate I mentioned.
How do you see the market and the demand for CXL evolving? And maybe at a high level, could you remind us why CXL is happening now, how it grows from here, and how you benefit from it?
Yeah, sure, that's a good question. For Rambus, we have two vectors here. One is our silicon IP business: we provide leading controllers for a variety of CXL-based solutions. With CXL, the important thing to remember is that it's an interconnect standard and protocol, so there's going to be a wide range of solutions under development, from signal retimers and switches to larger ASICs for CXL-attached memory, which is the second vector we've talked about historically: driving towards a serial-attached memory solution based on CXL. I think one of the things we've seen as an ecosystem is that CXL has pushed out to the right from initial expectations over the last couple of years.
The evolution of the CXL standard and protocol continues to add more use cases. Certainly, we've seen fragmentation in how a circle can be drawn around enough use cases to make sense for any one chip solution. It remains important because the serial-attached nature of CXL allows you, in the case of CXL-attached memory specifically, to augment system memory via a very small number of pins. From a pin-count perspective, it's a very cheap way to increase memory content for data center applications. It continues to be driven by a number of vendors, including ourselves, with silicon solutions available to customers.
We look forward to it, even though it's been pushed to the right a bit.
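To give a feel for the pin-count argument behind CXL-attached memory, here is a rough comparison. It assumes the standard 288-pin DDR5 DIMM socket and a CXL x8 link, where each lane carries one differential pair in each direction; both figures are approximate and for illustration only.

```python
# Rough pin-count comparison behind the "very cheap way to add memory"
# point above. Figures are approximate and illustrative.

DDR5_DIMM_PINS = 288  # standard DDR5 DIMM socket pin count

def cxl_signal_pins(lanes: int) -> int:
    # Each PCIe/CXL lane uses two differential pairs (TX + RX) = 4 signal pins.
    return lanes * 4

x8_pins = cxl_signal_pins(8)
print(f"DDR5 DIMM: {DDR5_DIMM_PINS} pins vs CXL x8 link: {x8_pins} signal pins")
```

So a serial CXL link needs an order of magnitude fewer signal pins than a parallel DDR5 channel, which is why it is attractive for augmenting memory capacity.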
Okay, and then you announced controller solutions for HBM3E and GDDR7. Could you expand on that, notably as it relates to AI, and give us more color on your product portfolio?
Yeah, certainly. The focus we have in our silicon IP space, as Des talked about, is on data center applications, largely driven over the last several years by our engagement with AI chip makers and suppliers. So you'll see HBM as the chosen memory for training and GDDR for inference, as we see targeted solutions coming out there. We've continued to provide the market leading-edge solutions here, from HBM2 through ongoing work on the next generation, HBM4, which is starting to get some press publicly in the last couple of days. So very interesting times. What we see is the continued acceleration of these solutions.
We recently announced a GDDR7 controller solution. And with HBM, there's the absolute time-to-market advantage we offer, moving from HBM3 to HBM3E and on to HBM4 at a very rapid pace, providing customers proven solutions to wrap into their ASICs and SoCs.
Great. We have just 15 seconds left, but maybe on your long-term financial model, any key points you would want to highlight for the audience?
Yeah, in terms of the long-term financial model, where we continue to see a lot of the growth opportunities is certainly on the product side, as we've talked about today. Matt's talked about MRDIMM-like opportunities, and that's where we see the future of the company going. We feel we're very well positioned based on the investments we've already made in the model.
Great. Thank you very much for presenting with us today.
Thank you, Tristan.