Rambus Inc. (RMBS)

Bank of America Global Technology Conference 2025

Jun 5, 2025

Duksan Jang
Analyst, Bank of America

Thank you for joining us today. My name is Duksan Jang. I'm part of the U.S. Semiconductors and Semicap Equipment team here at Bank of America. I'm very delighted to host the Rambus team today. Luc Seraphin, Chief Executive Officer, and Desmond Lynch, Chief Financial Officer. Thank you so much for coming.

Luc Seraphin
CEO, Rambus

Thank you, Duksan.

Duksan Jang
Analyst, Bank of America

Do you have any disclosures that we need to make before we begin?

Luc Seraphin
CEO, Rambus

Yeah, we just encourage everyone to please read our documents on file with the SEC. They cover a lot more about the company than we will talk about today, Duksan.

Duksan Jang
Analyst, Bank of America

Awesome, awesome. I think we can start high level. Can we talk about the state of the union? What are you seeing in the demand environment today, especially perhaps versus the beginning of the year, since we've had so many ups and downs this year?

Luc Seraphin
CEO, Rambus

Yeah, so we're very pleased with how the year started for us. We do see some very nice tailwinds for the server market, both in the traditional server market and in AI servers. That drives demand for more bandwidth and capacity in those servers, which, as a consequence, drives demand for our products. We had a very nice first quarter on the product side with 52% growth compared to the same quarter last year. We also see demand for our silicon IP business. As people develop custom chips for AI, they need high-level security IP. They need high-speed interconnect controllers, as well as high-speed memory controllers. The overall AI environment and the traditional server environment have been quite good for us at this point in time.

Duksan Jang
Analyst, Bank of America

Awesome. I'll get back to the silicon IP business, but starting with the product side, as you mentioned, very good quarter in the first quarter. How should we think about the overall market size, just stepping back, and if you can talk about the competitive dynamic?

Luc Seraphin
CEO, Rambus

Sure. We traditionally started by building what we call RCD chips or buffer chips, little controller chips that sit on memory modules and handle the interface between the processors and the memories. The market for this chip, we estimate, is about $750 million in size. The nice thing with the DDR5 generation of module products is that in addition to the RCD chip on the module, DDR5 demands what we call companion chips, chips that did not exist on the module in DDR4, the prior generation. These companion chips add an additional $600 million of SAM to the $750 million SAM. Beyond that, some of the requirements that we see today in these server environments are going to be demanded in the client space as well.

High-performance client systems are going to require chips that are similar to the RCD chip. We believe that we'll add a couple of million dollars more of SAM to that. We do see a SAM expansion that is coming from the fact that there's more content on the server memory modules. There's also an adoption of similar technologies on the client side.
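
The SAM figures quoted above can be sanity-checked with a quick back-of-the-envelope sketch. It uses only the approximate estimates mentioned in the discussion; the client-side addition is left out because its size is not specified here.

```python
# Approximate serviceable addressable market (SAM) figures quoted above,
# in millions of USD; these are the speaker's estimates, not official data.
rcd_sam = 750        # RCD/buffer chips on DDR5 server modules
companion_sam = 600  # companion chips new with DDR5 (PMIC, temperature sensors, SPD Hub)

total_module_sam = rcd_sam + companion_sam
print(f"DDR5 module SAM: ~${total_module_sam}M")  # ~$1350M
```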

Duksan Jang
Analyst, Bank of America

What would you say are the biggest drivers for this market? People talk about the memory channel, a number of channels, the bandwidth, the capacity. What would be the biggest driver for Rambus?

Luc Seraphin
CEO, Rambus

I think the first general comment we would make is that whether it is an AI server or a traditional server, there is very high demand for more bandwidth and more capacity. The reason is that server technology moves faster than memory technology. You have more and more cores on every CPU. Every core needs its dedicated memory. That drives demand in general, whether it is an AI server or a traditional server, for more bandwidth and more memory. What it translates into for us is that in the DDR4 generation of products a few years ago, we had to develop an RCD chip every other year. Now, today, in the DDR5 generation of products, we have to develop a new chip every year. The cadence has been multiplied by two. As I said earlier, we also have to develop those companion chips.

The drivers are really the growth of AI servers, the growth of traditional servers, and the number of channels per CPU. In the DDR4 generation, there were about eight memory channels per CPU. In the DDR5 generation, it's a mix of eight and twelve, converging to twelve. We believe that at the end of the DDR5 generation of products, people will probably converge to 16 channels. It means that you have 16 memory channels on each CPU. The other driver is how many modules you can populate per channel. Some applications require one module per channel. Other applications require two modules per channel. That's how the market grows: through growth in the number of channels and in the number of modules you can put on every channel.
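
The channel and module arithmetic described above can be sketched as follows. The channel counts come from the discussion (about 8 for DDR4, 8 to 12 converging to 12 for DDR5, possibly 16 late in DDR5); the two-socket server and the modules-per-channel values are illustrative assumptions, not figures from the conversation.

```python
# One RCD chip per DDR5 module, so RCD demand per server scales with
# channels per CPU and modules per channel. The cpus=2 default and the
# modules-per-channel values below are illustrative assumptions.
def rcd_chips_per_server(channels_per_cpu: int, modules_per_channel: int, cpus: int = 2) -> int:
    return channels_per_cpu * modules_per_channel * cpus

print(rcd_chips_per_server(8, 1))    # DDR4-era server: 16
print(rcd_chips_per_server(12, 1))   # DDR5 server today: 24
print(rcd_chips_per_server(16, 2))   # late DDR5, 2 modules/channel: 64
```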

Duksan Jang
Analyst, Bank of America

Just going back to my earlier question on competition, I know you guys are the leader in RCD chips, with about 40% share exiting last year. Your goal is 40%-50%. What do you think needs to happen for you to reach the high end of the target or even push beyond that?

Luc Seraphin
CEO, Rambus

Yeah, that's a good question. In the DDR4 generation, we started with 0%, and we walked our way up to about 25% share. In the DDR5 generation, we enjoyed a little north of 40% last year because we invested very early in every sub-generation of product. That's really, really important in that ecosystem because the qualification processes in that ecosystem are very complex and take a lot of time. If you are the first one to introduce a new sub-generation of product into that ecosystem, you actually marshal the resources of every ecosystem member, and they work with you in getting that product out. We've been very good at investing very early in every sub-generation of product. That's what took us from the 25% share that we enjoyed in the DDR4 generation to the 40%-plus share in the DDR5 generation.

Now, the ecosystem, for reasons of security of supply, will always want to have multiple suppliers, typically three. We have two competitors. One is Montage, a Chinese company, and one is Renesas, who bought that business from IDT. I think the ecosystem will always require this type of arrangement because these little chips sit between processors and memory. If any one of these vendors fails for whatever reason, you block the whole supply chain of that ecosystem. I think we can continue to grow our share from 40%. Our goal is to get to about 50%. There is going to be some natural saturation in terms of share. We have to count on the market growth, but more importantly, the content growth as we introduce all of these companion chips on the same module.

Duksan Jang
Analyst, Bank of America

Understood. Talking about content, and I know during the first quarter earnings call, you mentioned you're generally CPU agnostic, whether it's x86 or ARM. How should we think about it, given ARM CPUs tend to have a higher number of cores? Does that benefit you? And for CPUs like NVIDIA Grace that use LPDDR, how does that work into your content?

Luc Seraphin
CEO, Rambus

Those are two different questions. On whether it's an ARM core or an x86 core, we truly are agnostic. What people are looking for is to add more and more cores for reasons that have to do with computational power. The more cores they add, the more memory they have to add. All of this is good for us. Whether it's ARM or x86, I think we actually welcome that competition. We welcome the competition between the ARM-based processors and x86, and we welcome the competition within each one of these camps because they all drive demand for more buffer chips. With respect to LPDDR, this is a niche market today. LPDDR is typically used in client applications. It brings some benefits, in particular in terms of power. That's why it's called low-power DDR.

It also comes with challenges that have to do with reliability, with the physical requirements that you have there. Our company, Rambus, has been in that business for 35 years. Every leg of our business has to do with memory technologies. We do have a patent portfolio that covers LPDDR and DDR. We do have our silicon IP business that has cores in LPDDR and in DDR. When it comes to products, the vast majority of products today are DDR. If there was a compelling reason for growing an LPDDR solution on the product side, we would be ready to do that.

Duksan Jang
Analyst, Bank of America

Understood. And then staying on top of this AI topic, we're obviously seeing a lot of demand moving away from training and more towards inference. Does that also have an impact on your product cadence or content?

Luc Seraphin
CEO, Rambus

It will be another tailwind for us. Typically, inference systems are simpler than training systems. A lot of things that are currently being used on GPUs and HBM can actually be run on more standard processors on the inference side. That will drive demand for us. The nice thing about this market is that whatever processor you use, because they have to use DRAM on the other side, those DRAM interfaces are standard interfaces. Whether that DRAM interface is on a standard processor, ARM-based or x86-based, or whether it is on a custom chip that people develop for AI inference, for example, you will have the DDR interface. On the other side of the DDR interface, you will have a module with that standard product. All of these are good tailwinds for us. We are looking forward to enjoying the rise of AI inference.

Duksan Jang
Analyst, Bank of America

I did want to just go back to the earlier LPDDR question, just because when we talk to ARM, when we talk to NVIDIA, they obviously have very aggressive outlooks for their Grace CPU. If you were to develop an LPDDR product for the server side, how long would that generally take to develop and then ramp?

Luc Seraphin
CEO, Rambus

The first thing I would say is that the current LPDDR solutions are soldered solutions. They're not on modules. You don't have the equivalent today of a buffer chip. It's a bit like HBM. Today, HBM doesn't require a buffer chip. We watch that. To the extent that the market moves to solutions where LPDDR can be reliably integrated on a module as opposed to being soldered, the development of the chip would be similar to the development of the buffer chip. These developments last a couple of years. The qualification in the market takes time as well. For us, it's very similar technology, whether it's LPDDR or DDR. That's a very similar environment. It's chips that we have to develop for modules. The module environment is a very specific environment in terms of thermal requirements and noise requirements.

That's an environment we know well. The ecosystem is an ecosystem we know well. The vendors of LPDDR memories are the same vendors as the DDR memories. The end users are going to be the same end users. The whole ecosystem is very similar. As a consequence, it would take us a similar amount of time. This push, as you say, is a very interesting concept. That's an ecosystem that will have to converge on a standard solution because every chip has to talk to every chip, and every one of those chips has to talk to every memory module. The industry will have to converge onto a standard solution, just as we do today with buffer chips, typically through JEDEC. We are an active member in JEDEC and part of those discussions.

Duksan Jang
Analyst, Bank of America

Got it. And then on to everyone's favorite topic, tariffs. You said patent licensing is not affected. On the silicon IP and product side, it's tougher to gauge the indirect impact. How should we think about the overall impact today versus, say, the end of April when you reported? Obviously we're hearing so much more every day, but I think a lot of the nuances have stabilized.

Luc Seraphin
CEO, Rambus

If you look at our business, our patent licensing business, as you correctly say, is completely immune. These are legal agreements that are long-term agreements with our customers. There is no exchange of technology there. That business is about $210 million, 100% margin. That gives us a very solid base in terms of protection against tariffs. The silicon IP business is also not affected by tariffs. We provide IP to our customers. Actually, our exposure to China, even with our IP business, is very small as a company. It is a low single-digit percentage of our business. Even if there were questions about tariffs with silicon IP, and there are not, that would have minimal impact on us. The question is about our product business. Our product business last year was about $250 million.

We review our situation with respect to tariffs almost on a weekly basis. At this point in time, we are not affected. One of the reasons is that our front-end supply chain is in Taiwan. Our back-end supply chain is in Taiwan and Korea, not in China. We are selling our products to the memory vendors, who typically buy them in Asia. At this point in time, these products are exempt. Things can change, but we are under these exemptions, so there is no impact today. There might be indirect impacts that we are watching. One is if other companies shift their supply chains away from China to other areas in Asia, will this create a supply crunch that indirectly affects us with our suppliers? The second thing is the overall uncertainty in the market.

Are these tariffs going to destroy, I would say, demand? These are indirect effects that we're watching. In terms of direct effects, there are none at this point in time.

Duksan Jang
Analyst, Bank of America

Understood. Just going back to the China exposure, obviously, we're hearing the EDA companies being left out of that market. Would you say that's also a similar risk for you on the IP business?

Luc Seraphin
CEO, Rambus

There's always this risk, but that's not something that is new to us. As much as we review tariffs on a regular basis, we also review restrictions with respect to IP on a regular basis. We've been doing this for years well before tariffs were in place. At this point in time, we've had very, very little impact. As I indicated earlier, our exposure to the China market is very small. It's low single digit. Even if we had a 100% impact, that would have a low single-digit impact on our business. Today, there's no impact.

Duksan Jang
Analyst, Bank of America

Understood. Moving on to the companion chip opportunities. You launched eight new chips last year. I believe you said you expect about a low single-digit contribution in the first half. How should we think about it as we go into the second half? Obviously, next year, we should see more of a ramp. If you can either quantify or qualify that for us.

Luc Seraphin
CEO, Rambus

Yes. As we indicated earlier, when the market moved from the DDR4 generation of memory modules to the DDR5 generation, the industry, through JEDEC, where everyone has to agree, decided that some functions that were sitting on the motherboard in the DDR4 generation had to be implemented on the memory module instead in the DDR5 generation. When you move from DDR4 to DDR5, instead of having one RCD chip on the module, in the DDR5 generation you have one RCD chip, one power management chip, two temperature sensors, and one controller chip, which we call the SPD Hub. When that transition happened in the market, our strategy was to make sure that we secured the RCD chip market share first because that's the most complex chip to make.

That explains why we could move from 25% share in DDR4 to more than 40% in DDR5: we wanted to focus on that. That transition was strategically very important for us because that's the most complex chip. Then we started to develop our companion chips. The next most complex chip on that module is the power management chip. In the first generation of DDR5, we were not playing. There were a lot of players; a few have survived, and a lot have not. One of the reasons is that doing a power management chip is one thing. Doing a power management chip in a module environment that is very noisy, very tight in terms of real estate, and thermally challenged is a different thing. We invested in in-house power management chip development more than two years ago.

We introduced our power management chips last April. We have also introduced the other companion chips. Now, like everything in that market, you have to intercept a platform from Intel and AMD. That's how the market works. The platforms that use this generation of our power management chips and companion chips are going to start ramping, if they're not late, in the second half of this year. The way to look at it is, you're right: today it's a low single-digit portion of our revenue as we ship pre-production qualification quantities. When these platforms ramp toward the second half of this year, we're going to see our share growing. We're going to see the bulk of that growth in 2026.

We've been public about saying that for these companion chips, our objective is to reach about 20% share at this point in time because the competition landscape is a bit different. Obviously, we'll try to do more than that.

Duksan Jang
Analyst, Bank of America

On your MRDIMM chipset, obviously, qualifications are ongoing. It probably depends a lot more on the customer side, when they ramp their products. What would you say is a realistic ramp timing for Rambus? When would this be more material for you?

Luc Seraphin
CEO, Rambus

Yeah. So for people who do not know, the MRDIMM chipset is a very interesting concept. The idea is that on a memory module, you double the amount of memory and multiplex access to that memory onto the memory bus. What it allows you, or the industry, to do is, with exactly the same infrastructure and the same CPU architecture, you can picture removing a standard DDR5 module and plugging in an MRDIMM instead. All of a sudden, you double the capacity and double the bandwidth. It has a lot of traction because, as I said earlier, people are always looking for more bandwidth and more capacity. Again, the industry had to converge on the exact definition of this MRDIMM.

That's why, when we announced it, we said it's the first JEDEC-compliant chipset, because that gives you confidence that the industry is going to use it. As we explained for the companion chips, this MRDIMM is linked to a platform launch. That platform launch will happen in 2026. We have developed the products. We have sampled the products to our customers. They're going through all of their lengthy qualification processes. The product will ramp with the ramp of the following generation of CPUs, which at this point in time is scheduled for the second half of 2026. We're going to see the initial ramp of those products in the second half of 2026.
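
The multiplexing idea described above can be sketched with a toy model: two ranks of memory sit behind one module interface, doubling capacity, and their accesses are interleaved on the bus, roughly doubling effective bandwidth. This is a conceptual simplification, not the actual JEDEC MRDIMM protocol; the base-module figures are illustrative.

```python
# Toy model of a multiplexed-rank module: doubling the DRAM behind one
# module interface doubles capacity, and interleaving the two ranks on
# the bus roughly doubles effective bandwidth. Conceptual sketch only.
from dataclasses import dataclass

@dataclass
class Dimm:
    capacity_gb: float
    bandwidth_gbs: float  # GB/s

def mrdimm_from(standard: Dimm) -> Dimm:
    # Double the DRAM on the module and multiplex it onto the same bus.
    return Dimm(standard.capacity_gb * 2, standard.bandwidth_gbs * 2)

base = Dimm(capacity_gb=64, bandwidth_gbs=38.4)  # illustrative DDR5 RDIMM figures
mr = mrdimm_from(base)
print(mr)  # same slot, double the capacity and bandwidth
```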

Duksan Jang
Analyst, Bank of America

Got it. Last one on products, if we talk about the client opportunity, and you've alluded to this earlier as well, but the clock drivers, how should we think about the opportunity there and its ramp timing?

Luc Seraphin
CEO, Rambus

Yes. Why do we go there? Some of the challenges in the data center have to do with the environment. You have to transmit signals faster and faster between the processor and the memory in a very noisy environment. It is very tough to do, especially when you have to double the speed at such a fast pace. That is why we developed RCD chips on the CPU side. The RCD chip is all about what we call signal integrity. It is about transmitting very small signals in a very noisy environment without losing data. Those requirements do not exist today in the client space. As client systems become faster and faster, what we see is that on the high end of next-generation client platforms, we are going to face similar challenges in terms of signal integrity.

We're going to have to have chips that reconstruct those signals, as we do on the CPU side for data centers. That is what the client clock driver (CKD) is. It is going to address a very small portion of very high-end PCs, if you wish. The market is going to be modest. We expect the market to be about $200 million. The ramp is starting now. It is going to grow quarter over quarter through 2026. Strategically, what is going to happen is, as time passes, more and more client systems are going to require that signal integrity function. Client systems are also going to require some elaborate power management functions. What we see is the technologies we develop for the data center waterfalling into high-end client systems.

With time, more and more of these client systems are going to use these technologies. The CKD is the first of the building blocks that we are building for the future.

Duksan Jang
Analyst, Bank of America

Got it. Moving on to silicon IP, obviously, the HBM market is the one that's driving growth. How should we think about your content when the HBM3 stack moves from 8-high to 12-high and then on to HBM4? Is there an uplift there?

Luc Seraphin
CEO, Rambus

Our silicon IP business, for people who are not too familiar with it, is a very different business model. We develop memory controllers, in the case of HBM, HBM memory controllers, and we license them, typically to semiconductor companies. These semiconductor companies integrate the controllers into their chips, whatever those chips are. It may be an ASIC, a CPU, a DPU, a GPU, a custom ASIC. What this means is that we have to develop those controllers over probably 18 months to a couple of years. We have to engage with those customers 18 months to two years before those chips are actually in the market. In terms of HBM3 and HBM4, we've been engaged with customers on HBM3 for a couple of years now.

We announced HBM4 last year, and we were already engaged with customers on HBM4 last year. I think we indicated when we commented on our Q4 results that one of the reasons we had good silicon IP results in Q4 was demand for HBM4 controllers. Our strategy on these HBM controllers has always been to be a little higher in speed and performance than what the market requires. We have very early engagements with our lead customers ahead of what the market needs because we have to be about two years ahead. The size of the stack does not really drive our development, but the speed really drives the development. We always try to be at a slightly higher speed than what the market requires.

The demand for AI training in particular, where you have GPUs using HBM memory, drives the demand for HBM silicon IP controllers. What we have to understand is that in a GPU HBM environment, there's no equivalent of a buffer chip. There's not a chip that sits between the GPU and the HBM memory. As you said, there's a stack of memory. Inside the GPU, there's an HBM controller that we sell as silicon IP that drives the connection to this HBM memory. It's been a good driver of our growth of the silicon IP business. As you know, our silicon IP business is about $120 million a year. We say it's growing 10%-15% a year. Part of this 10%-15% has been actually driven by the demand for HBM over the last couple of years.
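
The run-rate and growth figures quoted above imply a rough trajectory that can be compounded forward. This is purely illustrative arithmetic using the ~$120 million base and the 10%-15% range stated in the discussion; it is not guidance, and the three-year horizon is an arbitrary choice for the sketch.

```python
# Compound the ~$120M silicon IP run rate forward at the quoted 10%-15%
# annual growth range. Illustrative arithmetic only, not guidance.
base_revenue_m = 120
years = 3
for growth in (0.10, 0.15):
    projected = base_revenue_m * (1 + growth) ** years
    print(f"{growth:.0%}/yr -> ~${projected:.0f}M after {years} years")
```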

Duksan Jang
Analyst, Bank of America

Got it. I know we're running out of time, but an important question for Des: as we think about the margin trajectory, Q1 was a little bit weaker on the product side. You have a lot of different factors going on. You have the price negotiations, the cost downs, the price erosion. How should we think about the second half outlook? And into 2026, you also have the companion chips ramping.

Desmond Lynch
CFO, Rambus

Yeah, it's a good question. On the product gross margin side, we have a long-term target of 60%-65%. If you look at our annual performance over the last three years, we've been operating at 61%-63%, certainly within our targeted range. We're very pleased with how we've been able to operate. This is a healthy margin for a chip business. What we've said is we've done a really nice job as a company of being disciplined on the price side as well as continuing to make manufacturing cost savings to maintain that margin level. As it relates to the new product contribution, that will be contained within the overall 60%-65% gross margin target.

Obviously, in any given quarter, depending upon mix and where the products are within that cycle, the margin can move around a little bit. I think in the long term, we have a good track record of delivering on the product gross margin side. That's something we'll continue going forward.

Duksan Jang
Analyst, Bank of America

Awesome. I think we've run out of time. Thank you so much for coming. Thank you for the audience as well.

Luc Seraphin
CEO, Rambus

Thank you.

Desmond Lynch
CFO, Rambus

Thank you.
