Director of Investment Banking Business for Morgan Stanley. I'm proud and honored to be on stage today with the team at Rambus. I've got Luc Seraphin, the CEO of Rambus, and Des Lynch, the Chief Financial Officer of Rambus, here with me. So Luc and Des, both of you, thanks for joining us. To start at a super high level, Rambus has been around for a long time. The company started in 1990. Luc, you just crossed your five-year mark as CEO in October of last year. It's a very different company today than what it's been historically. Can you perhaps walk the group through the journey that you've been on since you became CEO and where you are today?
Yeah, thanks. Thanks, Marco, for having us. As you said, Rambus started about 30 years ago. It's good to go back there. And as the name indicates, Rambus, we were developing the foundational technology used in memory interfaces. And we started the business 30 years ago as a patent licensing business. But over time, as you said, Marco, we transitioned the business to becoming a technology provider to the semiconductor industry and to the system industry. So if you look at the business of Rambus today, it's built on three pillars. We still have a patent licensing business, which generates about $200 million-$220 million every year from that foundational technology. That business is very stable, very high margin. And it gives us the cash necessary to invest in the products and technology that we sell to our customers. So that's the first pillar.
The second pillar is what we call Silicon IP business. Silicon IP business is a business whereby we actually develop IP blocks that we sell to semiconductor companies. These semiconductor companies integrate those IP blocks into their SoCs, ASICs, or standard products. The IP blocks that we develop are very specific to things that are important to AI, for example. We develop IP blocks in the field of HBM and GDDR for memory, in the field of CXL and PCIe for connectivity, and as importantly, in the field of security. So if you look at this very focused IP portfolio, this business represents about $110 million annually. This is a business which market grows about 10%-15% a year. So that's the second pillar. The third pillar is standard semiconductor products. Here we're developing what we call buffer chips.
We sell those buffer chips to the memory vendors, namely SK hynix, Samsung, and Micron. They integrate those buffer chips onto their DRAM memory modules. That business has grown quite nicely for us. As you mentioned, a few years ago, in 2018, that business was about $38 million. Last year, it was about $225 million. So we established a very good track record with that product business. Going forward, we see a lot of opportunities in memory subsystems in general. I think what AI creates is new architectures, new challenges in terms of addressing the interface between memory and processing units, and because of our heritage and DNA, that gives us a lot of confidence and opportunity going forward.
That's fantastic. Thanks, Luc. So let's spend a little time talking about the chip business, which I think has been sort of the primary and will be the primary growth vector as you think about the future, specifically memory interface. What does that market opportunity look like size-wise? And what's sort of your position in it today? And as you look ahead at next-generation technologies there, how are you positioned for that as well?
OK. So if you roll back to 2022, we estimated the size of that market for buffer chips to be around $800 million-$850 million. Last year, that market actually shrank by 10%-15% for many reasons that we can talk about. And we expect that market to rebound mid-single digits this year. So this is a market that's going to be around $700 million-$750 million. What we said on our last call is that we estimate our share at the end of 2023 to be a little north of 30% in that market. Remember, we had 0% in 2018. And we expect to grow faster than the market in 2024. But if you look at the composition of our share in 2023, our blended share is north of 30%, as we said. But our DDR5 share is approaching 40%.
So that transition of the market from DDR4 to DDR5 has been beneficial for us in terms of share gain.
Can you explain to folks, just to give it some context, why that transition is so important to the industry, in particular as the data center evolution into AI that we're seeing?
So that transition is really, really important for people who know this business. The DDR4 generation lasted for seven years, so this is really a generational transition in the market. DDR5 technology brings the memory more bandwidth, more capacity, better power performance, so all good things. But it's also associated with performance improvements on the processor side: better connectivity, higher core counts. So the whole industry benefits from that transition to DDR5. So that is really, really important for the industry as a whole, for processor vendors, for memory vendors, and for us. And that's a technology that is evolving very, very fast now. In the DDR4 generation of products, there was an upgrade of DDR4 every other year. Now this upgrade is happening on DDR5 every year.
The very fact that over the last few years we have refocused the company on resolving this memory bottleneck question has allowed us to develop those technologies much faster than we used to. It puts us in a much better position going forward.
Yeah, so that raises an excellent transition question. You're talking about the memory interface part of things. You spent some time talking about buffers. Let's talk about the future. There's been a lot of focus around AI, around CPU, compute, and really the core processing power. But there are angles of that that haven't been as focused on in the sector. And you're working on something pretty unique there, which is CXL. Can you talk about CXL, your initiatives, that market in general, what the opportunity is with that, and then maybe your position within that?
Sure. So when the industry transitions to this DDR5 generation for general-purpose servers and to AI servers, this opens up opportunities for many more types of technologies, CXL being one of them. And the way to explain this is when you look at a new type of AI server architecture, the architecture is what we call heterogeneous in the sense that people develop GPUs, DPUs, specialized ASICs. And all of these chips have to talk to each other. And they have to speak the same language. And the industry has rallied around CXL as a way for these different chips coming from different vendors to talk to each other. So that's why CXL is exciting. Now, for us, the market for CXL is really today in our silicon IP business.
In our silicon IP business, where we develop blocks of IP and sell them to the market, we have a very large number of customers, from startups all the way to the large customers. They all have to implement a CXL interconnect in their chips. So that creates demand for our CXL interconnect IP. Going forward, we expect a chip business to develop. But that chip business will probably develop at the time CXL 3.0 comes online. In the first generation of CXL, we're going to see a large market, but a fragmented one. Everyone is developing their own chip with a CXL interface. Everyone is working on the usage model. But the industry will have to coalesce around a kind of standard chip definition that will drive the demand further for chip opportunities. So we're playing in these two, I would say, phases.
We have very strong traction on the IP business today on CXL 2.0. We are ahead of the curve in providing CXL 3.0 IP to the market. We have developed, by the way, a CXL chip ourselves that is in the hands of CSPs, in the hands of customers. They're using it to assess the feasibility of CXL solutions. They're using it as a proof of concept to write their software models and everything. We have that insight into the ecosystem that will develop into a chip development that we will commercialize at the time of CXL 3.0.
When you say you have that insight into the ecosystem, is that primarily coming from your core customers historically, like folks like the Microns and the memory side?
It's from a much broader base. I think when you have these technologies like CXL or HBM or GDDR or security, you actually talk to a much broader set of customers. Anyone in the semiconductor industry who needs to develop an HBM interface because they want to talk to HBM memory, or who wants to develop a GDDR interface because they want to talk to GDDR memory, or who wants to secure their silicon, has the opportunity to talk to us. So we actually talk to a much broader set of customers than our traditional buffer chip customers. We also talk further down the value chain. We're talking to our customers. We're also talking to our customers' customers who, more and more often, are defining their own chips. So that's what I mean by having that insight.
Our silicon IP business provides us this $110 million of revenue that grows 10%-15% a year. But in addition to that, it gives us very good insight into where the markets are going with these interfaces.
Right, because people come to you.
Because people come to us. They have to implement those interfaces. They are further ahead in understanding what they need for the future. We have insight into that.
How important are sort of hyperscalers to that part of the business and that sort of dynamic?
They are important. This is one of the shifts in the industry that we will benefit from. Traditionally, as you said, our buffer chip customers on the chip side are the three memory vendors. So with our buffer chip business, we had insight into what these three memory vendors do, and also what the Intel and AMD roadmaps are going to be. But now, with these new AI architectures, with these new data center architectures, we see more activity from our customers' customers, in particular the CSPs. And we're talking to them. And that gives us, again, good insight as to what we should develop for the future. Remember, memory interfaces are part of our DNA and our heritage. We've been doing this for 30 years. We don't talk much about our patent licensing business. But we have a very strong portfolio of more than 2,500 patents in that area.
We continue to develop foundational technology for memory interfaces, whether it's DDR, LPDDR, GDDR, HBM, CXL, PCIe. All of these are based on foundational technology that we keep investing in as part of our Rambus Labs group.
Would you say all those categories are going to be foundational and critical to the AI expansion in data center and beyond?
Yes. This AI wave is earth-shaking, as everyone sees. But there's always going to be the fundamental question of how processing units access memory in the most efficient way. And actually, if your DNA and your heritage include 30 years of experience in addressing flexible memory access, then we are well positioned to address any type of market that is going to develop around this. You asked what we did over the last five years. One of the things we did is we sold a lot of activities that were not central to that so that we could focus on it. And now we're in a much better position with this acceleration of technology because that's the only thing we do, basically.
That allows us to move fast and to be agile, as we will see a lot of moves in the next few years.
Got it. So I haven't meant to ignore you here. But as you think about sizing the opportunity, particularly going back to the CXL part, we know what the memory buffer market looks like. We've defined it as $700 million. What does that generally look like for CXL for you folks, both as you think about the near-term IP part of it that Luc has been articulating, but as you get to 3.0 at CXL, and once you guys start playing in chips, what does the SAM look like for you?
Yeah, that's a great question, Marco. I think on the product side, as it relates to CXL, we hear many estimates out there from people that this could be a $20 billion market opportunity by 2030. Given our focus on the memory controller play here, I would size this at about $600 million-$800 million by the time we get to the CXL 3.0 opportunity. So this adds a really nice market expansion opportunity for us. As Luc mentioned, we do have the leadership position in IP, which gives us insight into what customers are looking for from us going forward. We are very well positioned for when this market really takes off in the next couple of years.
Got it. Okay, so just last part here on the silicon IP business, the go-to market. What is the strategy around the silicon IP business going forward? I know there's been a focus and an emphasis and a lot of good communication around the chip side of the business. Can you articulate in a high-level way what the strategy is going forward for the silicon IP business?
So for the silicon IP business, we want to continue to have a very focused portfolio of IP. As I indicated earlier, we're focusing today on HBM technology, GDDR technology, PCIe, and CXL technology on the interface side. And we also have a very strong portfolio on the security side. So the number one part of that strategy is to stay very focused on those technologies that are required not only for the data center; these technologies are also going to be required in automotive, in government applications, and in client applications. And the second part of the strategy is to stay ahead of the curve from a performance standpoint. You saw the announcement we made recently on HBM3E. We have typically been the company announcing HBM at the highest speeds ahead of everyone else. That's the same for PCIe-type technologies or CXL-type technologies.
We want to be ahead of the curve so that we can engage with customers ahead of the curve. We can be embedded with their systems ahead of the curve. We will stay in a very focused portfolio. We will expand our technology leadership in that focused portfolio.
Got it. So just sticking with that business, great point on HBM 3E. Who are the targeted customers in that space? And is there perhaps a mistaken perception that it's the smaller accelerators? Is there a larger sort of target customer opportunity there?
The nice thing for Rambus, in particular with the Silicon IP business, is that we have a broad range of customers. As much as we have a concentrated customer set for our Buffer Chip business with the three memory vendors, if you look at our Silicon IP business, we have a broad range of customers. They range from the small startups all the way to the very large Fortune 500 companies. As I said earlier, it's not only the traditional semiconductor vendors that are our customers. One step further down the value chain, we also have customers developing their own chips, like the CSPs. A broad range of customers, a broad range of applications. Our business model is we sell licenses to these customers. We're kind of agnostic as to whether they succeed or not, especially for the small customers. We hope everyone succeeds.
But that's the type of customer set we have. It's concentrated today in data centers. We have a lot of traction in automotive as well because you need more and more connectivity in these automotive systems. We have very nice government applications, especially for our security portfolio. It's a broad range of markets.
Security, let's talk about that for a second. But as it relates to Heterogeneous Compute, and then we can come back to sort of M&A and how that has been part of the security strategy. But as you move into Heterogeneous Compute environments, what is the importance of your security solutions?
If you think about these AI systems, it's all about processing data. The last thing you want is to see your data hacked or compromised. Security technology has everything to do with this. We have two types of security technologies. The first is what we call data at rest. When data sits in a chip, you want to make sure that this data cannot be hacked, cannot be compromised, and is authenticated. That's the first part. We've been in that business since 2011, when we bought CRI. We have a long history and heritage in understanding those technologies. The second part of the technology is what we call data in motion. When you move data from, let's say, one DPU to memory or one DPU to another SoC, you want to make sure that when that data is transferred, it cannot be hacked.
It cannot be read, and so on. So this is becoming central to everyone developing these new architectures in the data center. We addressed data in motion through the acquisition of Inside Secure a couple of years ago. And with that portfolio of data at rest and data in motion, we can help our customers make sure that their data is secured. And it's really, really important. There's more and more data being processed, and the use of that data has great implications. If you look not at the data center but at the car, if you read sensors from the car and you cannot authenticate that the data you read is actually coming from that sensor, that could be very dangerous in an autonomous car. So these technologies are becoming critical. This has fueled the growth of our silicon IP business.
Actually, people don't know this, but the majority of our silicon IP business now comes from security. We have fewer competitors there. And more and more people are moving from in-house development to buying it from us.
Yeah. I mean, a lot of folks are focused now on transmission of this data efficiently. I think safety is one element of that. So last year, you guys launched Quantum Safe Cryptography IP. How is this coming to market?
Yeah, so Quantum Safe Cryptography IP sounds really good. But really, what it means is that when quantum computers are available, all of the cryptography algorithms that exist today can actually be cracked. And that's a huge, huge threat for the whole industry. So quantum-safe cryptography is actually the development of algorithms that are robust enough that they cannot be cracked by quantum computers. That's one of the things we have developed. And we were the first to announce this. This is another example of us wanting to be at the edge of technology development in the focused portfolio that we have. So when quantum computers are out there, we will have ways of protecting your system against quantum computers trying to crack it.
Wonderful. Okay, so I think there's a lot of opportunity in the future, a lot of investment required. As you think about the capital allocation strategy for the company, you've got R&D reinvestment, which you guys have done a nice job of. You've got return of cash to shareholders, which you've also done a nice job of. And then M&A. And M&A has been sort of a consistent, I'd say, thoughtful strategy for you guys so far. How do you think about those three today? And in particular, circling back to the return of cash to shareholders, quantifying that, generally how you folks think about it, and sort of guiding just in general going forward?
Yeah, it's a great question, Marco. What I would say is we've had a very consistent approach to capital allocation across the years, really fueled by a robust balance sheet and continued strong cash generation. This is really built around three pillars: organic investment, inorganic investment, as well as capital return, as you mentioned. Organically, we'll continue to fund the high-growth opportunities ahead of us. I think Luc touched upon that today with CXL and some of the opportunities within the memory interface chip. And it's important that we invest at the right levels to make sure we continue to drive the top line from there. And I think we've got a good track record there. Inorganically, we have been acquisitive.
We've made 5 acquisitions in the last 4 years, which has really enabled our silicon IP business to get to scale, to the $110 million that Luc talked about. We continue to have a rich funnel of M&A opportunities ahead of us. But we'll continue to be disciplined in that approach. And it's important to us strategically, operationally, and financially that this M&A opportunity makes sense to us. And lastly, on the return of capital to shareholders, we do have a commitment of returning 40%-50% of our free cash flow back to shareholders. Just last week, we did announce a $50 million ASR from there. And if you look at our returns over the last 3 years, it's been about $350 million, which equates to about 56% of free cash flow from there. So we do have that good track record of return to shareholders.
I would say going forward, we'll continue using the same playbook. It will focus on the three pillars that I've mentioned here. We have a good track record of capital allocation back to shareholders from there.
Yeah. So for both of you, since you've been so heavily involved in the M&A, both decision-making and integration, how would you kind of score out how you guys have done as far as integration, the businesses you've targeted, where you are on your journey as M&A players and consolidators?
I'll start. M&A has always been for us not only an opportunity to expand our business on the top line, but also to accelerate our roadmap. We've been very clear for the last five years about the strategy we wanted to take. We not only acquired companies that allowed us to stay ahead of the curve on these technologies, but we also divested things that were not central to what we do. And when you look at the market today in the data center, the acceleration in AI, the acceleration in the rollout of DDR5, the new companion chips that you see on modules, being that focused was really, really important for us.
So we acquired companies that allowed us to be faster and more efficient in the areas we have chosen to address. But at the same time, we have divested things that were not central to what we do.
Our customers are very happy with this. They see us investing in technologies that matter to them. That's the approach we've taken. That's the approach we will continue to take. As Des was saying, it's an active process. We always look at these things from a financial standpoint, of course, from a strategic standpoint, as I just explained, but also from an operational standpoint. We want to make sure that the culture, the chemistry, the integration work well so that we don't compromise our core business. With the acceleration we see in AI, the acceleration we see in the data center, the potential we see in automotive and other segments, we want to make sure that we don't compromise that core business that creates so much potential for us.
That's great. So talking about companion chips, going back to what you mentioned. So I'm going to dig into that and expanding away from the non-core. What are sort of the companion chip opportunities, particularly around DDR5? I think we obviously see something there. But what do you see?
Yeah, so the modules for standard servers are defined by JEDEC. And when the industry moved from DDR4 to DDR5, the JEDEC members decided that some of the functions that were originally on the motherboard in the DDR4 generation of products had to move to the module for technical reasons. So there are chips now on the DDR5 modules that you didn't have on the DDR4 modules. And these are the Temperature Sensor, the SPD Hub, and the power management chip. And as we saw that evolution in the market, we invested in those products. So we are shipping two of these chips today in small volumes, the Temperature Sensor and the SPD Hub.
We have invested in the power management chip by actually building a team, hiring people from the industry because we believe power management chip is going to add a lot of value to the DIMM, to the module, I would say. That's a technology that we want to secure internally. We have sampled the market with our power management chip in high volumes now. We get very good feedback. All of these products are going to start contributing to our revenue in the second half of this year and continue to ramp into 2025.
That's the timing on the contribution. Can you size for folks what the market opportunity is for those chips and then the expectation for you folks on revenue as far as just general quantification is concerned?
Yeah, I would say that in total, the companion chips represent about a $600 million market opportunity across the three chips, Marco. What we've targeted is getting towards a 20% market share on the companion chips. What we will see is a growing revenue contribution going into the second half of this year and into 2025. So we're very excited about the opportunity that this presents. We are shipping the SPD Hub and Temperature Sensor in low volume today. That will ramp in the second half. Luc talked about the progress that we've made on the power management device. We're sampling in high quantities today. That will continue ramping into the second half as well. But this offers a very nice adjacent chip opportunity next to our RCD buffer chip.
We're very well positioned to continue to grow the revenue going forward.
OK, so look, let's kind of transition to the obligatory near-term environment and financial questions. So you think about historically, 2023, there were some headwinds around the chip business in general, around DDR4. Maybe we can describe those for the group and provide an update on the inventory situation as you look at how this plays out in 2024.
Yeah, certainly, DDR4 has been a headwind for us. The last quarter in which we shipped meaningful volume of DDR4 was Q1 of 2023. Customers did build up excess inventory of DDR4, which was a function of the supply chain challenges of 2021 and 2022, as well as the delayed rollout of DDR5. What we've seen since then is that we've shipped minimal amounts of DDR4 in the period of Q2 through Q4 of last year. I remain encouraged by the fact that the inventory levels at customers came down sequentially each quarter. What we did acknowledge on the recent earnings call is that we do expect the inventory digestion to continue through the first half of 2024.
Then we'll see a modest recovery of DDR4 coming back into the model in the second half of the year. I still expect that DDR4 will have a long tail of demand, which will continue for 24 months. This inventory digestion is taking a bit longer than we had anticipated from there. I think it will be into the second half where we will see some of the recovery of DDR4 back into the model, Marco.
And how much of that is related to the bifurcation we've seen between general-purpose servers and AI servers? How much of those dynamics is driven by the general-purpose server part of things?
Yeah, I would say that that is the primary driver of the general-purpose market being soft. What we've acknowledged is that in the first half of the year, the general-purpose server market will remain soft. And I think that's consistent with others within the industry. We've heard similar comments from Intel and AMD on that. I do remain encouraged by other companies recently, HPE and Dell, really commenting about the return of the general-purpose server. So we're seeing that momentum, which gives us confidence that this will come back into the model in the second half of the year and continue to fuel product growth into 2025 and beyond.
Yeah, and obviously, a boon and a balance to that would be AI server-related stuff, which seems to be kind of going in the opposite direction in many ways. How does that play into the way you're thinking about forecasts and how far out?
So I think something to understand is that we do see, as Des is saying, a rebound of general-purpose servers in the second half of the year. Remember, there's so much legacy software on general-purpose servers that there's pent-up demand for a refresh. I talked earlier about DDR5 being online now. But DDR5 also comes with higher core count processors, better connectivity, better performance. So all applications that are not AI, and all the software that is out there, are due for a refresh that is overdue. So we're very, very confident in this rebound in the second half. The second thing we need to understand is that, to some extent, AI servers have been a good catalyst for us, because in every AI server, you have these racks of GPUs and HBM memory to do the number crunching. But you also have two standard servers.
The two standard servers are there to do the storage, the caching, the grooming, the preprocessing of the data, so the data is actually usable by these racks of GPUs. So we don't oppose AI servers and general-purpose servers. We see the general-purpose server market continuing to grow, with a kick in the second half of the year. But we also see AI servers as being a pull for standard servers. There's a lot of preprocessing that is better done by a general-purpose CPU than by a GPU. GPUs are designed to work on matrices, designed for highly parallel algorithms. That is very, very different from a general-purpose server. So we see this as an opportunity.
As I said, our DNA and our heritage, and the very fact that we talk to a lot of customers through our silicon IP business, allow us to be very agile in that market and to react very fast.
Yeah, I know you could probably spend all day talking about that dynamic. But why don't we open it up to questions from the audience with a few minutes left here?
For your security IP, can you help me understand how much can you shrink it down on the customer application side? Is it qualified at the foundries at certain nanometers?
Yeah, so that's a very good question, and if I may, I'll use the opportunity to expand the answer. All of our Silicon IP is what we call Digital IP. So we are agnostic to the node or the foundry. Once we develop a piece of IP, whether it's in security, in CXL, in HBM, in PCIe, or in GDDR, once that IP has been developed and tested, it can be sold into whatever target node you want, TSMC 7, Samsung 10, GF 12. And that allows us to accelerate our developments and to touch many more customers than we did in the past. You probably noted that last year we sold our PHY business to Cadence. And this has been a win-win for us and Cadence. The PHY business is different. The PHY business depends on the node.
So if we defined and designed a piece of IP, let's say for TSMC 7 nanometer, then we had to redesign it for another target. So the development costs were going up, and the number of customers was shrinking, because there are fewer and fewer customers who can afford a development in 5 nanometer or 3 nanometer. So that business was better placed in a company like Cadence, which has the scale. They also have the tool suite to work on that. And they're very happy with that business. On our side, we are very happy to have done that as well, because not only can we work with Cadence in offering complete solutions, but we can also integrate our digital controller IP with other PHYs. So it's been a win-win in the industry, a win for us, a win for Cadence, a win for the industry.
But our Silicon IP, whether it's security or interface, is Digital IP that is node-agnostic.
With your 40% share in DDR5 and the annual DDR5 upgrade cycle you described to come, help us understand the effect of that annual upgrade on your market share in terms of risk and opportunity.
Yes, that's a very good question. What we've seen in DDR5 is that the rollout of sub-generation of DDR5 is moving much faster than in the DDR4 generation. So if you look at the market today, Gen 1 is in production. Gen 2 is going to be in production in the second half of the year. People are qualifying Gen 3. We announced our first Gen 4 chip in December. So you see an acceleration of these sub-generations. And I'll go back to the strategy of the company. Because we have refocused the company on those technologies, we were able to actually double the speed of our development. We've been able to actually go and qualify all of these generations in parallel. And the market will find out which generation is going to be more successful than the others. But we are present in all of them.
That's what we wanted to secure. That's why we wanted to prioritize our DDR5 buffer chip first, so that we are in all generations with all customers, then the power management chip, because it's a sticky product as well, more complex to do than the other companion chips. So for us, it's an opportunity. It's an opportunity because we can run that race, we believe, faster than our competitors. We're more agile. We're focused on this. Our competitors are much larger companies. They're not U.S.-based companies. They're not as close as we are to our customers or our customers' customers. So that's an opportunity for us. It's also an opportunity on the ASP refresh side. Every time there's a new generation, we have an opportunity to refresh ASPs. And then as we move into volume production, those ASPs decay.
This acceleration of ASP refresh allows us to maintain the stated gross margin targets of 60%-65% on the product side.
Great. All right, well, with that, we are just about up on time. So thank you again, gentlemen. Really appreciate your time today.
Thank you, Marco.