I'd like to turn the conference over to John Allen, Interim Chief Financial Officer. You may begin your conference.
Thank you, operator, and welcome to the Rambus first quarter 2026 results conference call. I am John Allen, Interim Chief Financial Officer at Rambus. On the call with me today is Luc Seraphin, our CEO. The press release for the results that we will be discussing today has been filed with the SEC on Form 8-K. We are webcasting this call along with the slides that we will reference during portions of today's call. A replay of this call can be accessed on our website beginning today at 5 P.M. Pacific Time. Our discussions today will contain forward-looking statements, including our expectations regarding projected financial results, financial prospects, market growth, demand for our solutions, other market factors, including reflections of the geopolitical and macroeconomic environment and the effects of ASC 606 on reported revenue, among other items.
These statements are subject to risks and uncertainties that may be discussed during this call and are more fully described in the documents we file with the SEC, including our 8-Ks, 10-Qs, and 10-Ks. These forward-looking statements may differ materially from our actual results, and we are under no obligation to update these statements. In an effort to provide greater clarity in the financials, we are using both GAAP and non-GAAP financial presentations in both our press release and on this call. A reconciliation of these non-GAAP financials to the most directly comparable GAAP measures has been included in our press release, in our slide presentation, and on our website at rambus.com on the Investor Relations page under Financial Releases. In addition, we will continue to provide operational metrics such as licensing billings to give our investors better insight into our operational performance.
The order of our call today will be as follows: Luc will start with an overview of the business. I will discuss our financial results, and then we will end with Q and A. I will now turn the call over to Luc to provide an overview of the quarter. Luc.
Good afternoon, everyone, and thank you for joining us. We opened 2026 with a strong first quarter, meeting our financial targets and broadening our portfolio to address the accelerating demands of AI. The quarter reflects solid momentum as we execute against our roadmap to support long-term profitable growth for the company. This is an exciting time for Rambus, and we are well positioned to capitalize on the market trends in the data center and AI. For decades, we have developed foundational technologies and solutions across a wide range of memory and interconnects. That heritage positions us well as systems become more diverse, memory dependent, and performance driven. To give more context, there are several market and technology trends playing out across the data center and AI that continue to work in our favor.
As AI adoption accelerates and inference use cases expand, workloads are becoming more persistent and context-rich, and performance is increasingly defined by how efficiently data can be stored, accessed, moved, and secured. To support these workloads, AI infrastructure is becoming more complex and heterogeneous, combining a mix of traditional and AI server platforms to support orchestration, data management, and real-time execution at scale. At the same time, the expansion of inference, and particularly agentic AI with continuous reasoning and multi-step workflows, is driving more always-on activity and placing even greater demands on memory capacity, bandwidth, latency, and power efficiency. Together, these trends are driving new memory and connectivity architectures to support purpose-built solutions across a wider range of use cases and form factors. This increases our opportunities for richer chip content and broader adoption of our industry-leading IP, reinforcing our position for sustainable long-term growth.
Now let me turn to our quarterly results, starting with our chip business. Our performance reflects strong execution and ongoing leadership in our core DDR5 and LPDDR chips. We delivered product revenue of $88 million in Q1, in line with our guidance and up 15% year-over-year. Looking ahead, we expect to deliver double-digit product revenue growth in the second quarter. We continue to see increasing customer adoption of new products and remain well positioned to support the ramp of next generation platforms as they enter the market. We continue to execute on our strategy of delivering comprehensive industry-leading chip solutions to address growing customer and market requirements.
As I mentioned in my opening remarks, we recently expanded our product portfolio with the introduction of our chipset for JEDEC standard LPDDR5X SOCAMM2 modules, building on the same signal and power integrity expertise we have applied across multiple generations of DDR. This chipset is the first offering in our roadmap of LPDDR-based server module solutions and includes new voltage regulators as well as the SPD Hub to support reliable power efficient server class operation. As part of that roadmap, we are actively working with industry partners on the definition and development of LPDDR6-based SOCAMM2 solutions, which would offer a natural upgrade path for future generation AI platforms.
As AI server architectures diversify to address varying performance, power efficiency, and form factor requirements, some platforms are now leveraging LPDDR-based memory. While LP memory offers attractive power characteristics, it was originally designed for mobile environments with very short signal paths and tight power margins, making reliable deployment in server systems inherently challenging. The SOCAMM2 addresses these limitations through a compact CPU-proximate module architecture with optimized signal routing and localized power management to enable LPDDR modules to operate in server environments. The Rambus SOCAMM2 chipset enables power efficient, reliable operation at up to 9.6 Gbps in a compact module form factor. As LP-based server modules scale to higher speeds and bandwidth in future generations, they will require increasingly sophisticated interface power and control functionality.
This progression is similar to what we have seen in DDR-based server modules and reinforces our opportunity to extend our roadmap of high value chip content across memory types in the future. As I mentioned previously, the ongoing expansion of AI is driving demand for a broader range of memory types and form factors. To meet these needs, we continue to build on our leadership solutions in DDR5, including chipsets for RDIMM and MRDIMM, and selectively expand our roadmap of novel solutions as they begin to play a complementary role in heterogeneous systems. With active engagement across customers and ecosystem partners, we are helping shape next generation server modules, reinforcing the opportunity for richer chip content and sustained growth. Turning now to Silicon IP, we saw strong customer traction in the first quarter with continued design wins at Tier 1 companies and growing engagement across our portfolio.
We remain focused on delivering industry-leading premium IP that enables differentiated solutions for AI in the data center, including accelerators and networking chips across a wide range of architectures. There's increasing momentum for custom silicon in AI, especially among hyperscalers, as they tailor hardware to their own software stacks and deployment needs, optimizing for performance, power, efficiency and total cost at scale. This is driving an accelerating pace of design and expanding demand for value-added IP to support memory bandwidth, advanced connectivity and security. During the quarter, we saw growing traction for our value-added PCIe retimer switch IP to support increasingly complex AI systems across scale up and scale out environments. We also expanded our memory IP portfolio with the introduction of the industry's fastest HBM4E controller, setting a new benchmark for AI accelerator memory throughput.
In addition, we launched a new network security engine designed for Ultra Ethernet to protect distributed AI clusters. All of these IP offerings are in great demand and further strengthen our position as a critical enabler of next generation compute and connectivity solutions for AI infrastructure. In summary, we executed well in the first quarter. We delivered solid results and expanded our offerings for both chips and IP to extend our leadership in our core markets. As we look ahead, Rambus is well positioned to capitalize on the mega trends in data center and AI. Our sustained technology leadership, disciplined execution and increasing traction across our portfolio of leadership products will continue to fuel our results. With that, we expect strong growth in 2026 and I'm confident in our long term trajectory. As always, I want to thank our customers, partners and employees for their continued trust and support.
Now I turn the call over to John to walk through the financials. John?
Thank you, Luc. I'd like to begin with a summary of our financial results for the first quarter on slide three. We delivered first quarter revenue and earnings in line with our guidance with solid contributions from each of our diversified businesses. We also continued our strong track record of cash generation. This performance reflects the continued strength in our business model. Our strong balance sheet and disciplined capital allocation enable us to invest in growth initiatives while returning value to shareholders. Let me now provide you a summary of our non-GAAP income statement on slide five. Revenue for the first quarter was $180.2 million, which was in line with our expectations. Royalty revenue was $69.6 million, while licensing billings was $70.8 million.
The difference between licensing billings and royalty revenue mainly relates to timing, as we do not always recognize revenue in the same quarter as we bill our customers. Product revenue was $88 million, representing 15% year-over-year growth, driven by continued strength in DDR5 products and ramping new project contributions. Contract and other revenue was $22.6 million, consisting predominantly of Silicon IP. As a reminder, only a portion of our Silicon IP revenue is reflected in contract and other revenue, and the remaining portion is reported in royalty revenue as well as in licensing billings. Total operating costs, including cost of goods sold for the quarter, were $104.6 million. Operating expenses of $69.9 million were up sequentially due to seasonal payroll related taxes in connection with equity vesting.
Interest and other income for the quarter was $6.9 million. Using an assumed flat tax rate of 16% for non-GAAP pre-tax income, non-GAAP net income for the quarter was $69.3 million. Now let me turn to the balance sheet details on slide six. We ended the quarter with cash, cash equivalents, and marketable securities totaling $786 million, up $24 million from Q4 2025, with strong operating cash of $83 million, partially offset by $38 million in taxes paid on equity vesting and $17 million in capital expenditures. We increased our inventory balance by $14 million during the quarter and expect to continue building inventory strategically in the second quarter. Our strong balance sheet gives us the flexibility to increase inventory to support our product revenue growth and manage through potential supply chain constraints.
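[Editor's note: the non-GAAP figures stated above reconcile arithmetically. The sketch below is an editorial illustration, not part of the call, and simply applies the stated 16% flat tax rate to the stated revenue, operating cost, and interest figures.]

```python
# Cross-check of the stated Q1 non-GAAP figures (all values in $ millions).
revenue = 180.2                 # stated Q1 revenue
total_operating_costs = 104.6   # stated total operating costs, incl. cost of goods sold
interest_and_other = 6.9        # stated interest and other income
tax_rate = 0.16                 # assumed flat non-GAAP tax rate, per the call

pretax_income = revenue - total_operating_costs + interest_and_other
net_income = pretax_income * (1 - tax_rate)
print(round(net_income, 1))  # 69.3, matching the stated non-GAAP net income of $69.3M
```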
First quarter depreciation expense was $8.5 million. Free cash flow in the quarter was $66.3 million. Let me now review our non-GAAP outlook for the second quarter on slide seven. As a reminder, the forward-looking guidance reflects our best estimates at this time, and our actual results could differ materially from what I'm about to review. In addition to the non-GAAP financial outlook under ASC 606, we also provide information on the licensing billings, which is an operational metric that reflects amounts invoiced to our licensing customers during the period, adjusted for certain differences. We expect revenue in the second quarter to be between $192 million and $198 million. We expect product revenue to be between $95 million and $101 million, a sequential increase of 11% at the midpoint of guidance.
We expect royalty revenue to be between $72 million and $78 million and licensing billings between $76 million and $82 million. We expect Q2 non-GAAP total operating costs, which includes cost of sales, to be between $110 million and $114 million. We expect Q2 capital expenditures to be approximately $14 million. Non-GAAP operating results for the second quarter are expected to be between a profit of $78 million and $88 million. For non-GAAP interest and other income and expense, we expect $7 million of interest income. We expect non-GAAP tax expenses to be between $13.6 million and $15.2 million in Q2. We expect Q2 share count to be 110 million diluted shares outstanding. Overall, we anticipate Q2 non-GAAP earnings per share to range between $0.65 and $0.73. Let me finish with a summary on slide eight.
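[Editor's note: the Q2 guidance ranges above are internally consistent at their midpoints. The sketch below is an editorial illustration, not part of the call, assuming the same 16% flat non-GAAP tax rate carries into Q2.]

```python
# Midpoint check of the stated Q2 guidance ($ millions, except per-share EPS).
revenue_mid = (192 + 198) / 2   # 195
costs_mid = (110 + 114) / 2     # 112
interest = 7.0                  # guided interest income
tax_rate = 0.16                 # assumed flat non-GAAP tax rate
shares = 110                    # guided diluted share count, millions

operating_income = revenue_mid - costs_mid   # 83, the midpoint of the $78M-$88M range
pretax = operating_income + interest         # 90
net_income = pretax * (1 - tax_rate)         # tax of 14.4 sits inside the $13.6M-$15.2M range
eps = net_income / shares
print(round(eps, 2))  # 0.69, the midpoint of the guided $0.65-$0.73 EPS range
```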
In closing, we delivered solid results in line with our objectives, driving ongoing profitability and cash generation. Our diversified portfolio remains a core strength, with each of the businesses contributing meaningfully to our performance. Our patent licensing business continues to deliver consistent, predictable performance, supported by the long-term agreements we have in place. Our silicon IP business is well positioned, driven by critical interconnect and security technologies addressing the accelerated demand for AI solutions. Our product business grew 15% year-over-year and is poised for sequential growth in the second quarter. We remain focused on delivering long-term shareholder value with year-over-year revenue growth in 2026. Before I open the call up to Q and A, I would like to thank our employees for their continued teamwork and execution. With that, I'll turn the call back to our operator to begin Q and A.
Can we have our first question?
Thank you. Ladies and gentlemen, if you have a question, please press star one on your touch-tone phone. Your first question comes from the line of Kevin Garrigan with Jefferies. Please go ahead.
Yeah. Hey, team. Thanks for taking my questions. Can you just help us think about your product revenue into the June quarter? You know, last quarter you discussed the low double-digit revenue impact from a one-time OSAT issue. I think, you know, we may have been expecting a larger sequential increase for June, just kind of given how strong demand has been. Can you just walk us through the drivers for the June quarter product revenue and, you know, why the recovery might be a little bit more measured?
Hey, thank you, Kevin. Yes, sure. The first thing I would say is that, you know, the issue that we had talked about in the prior call is behind us. Everything is being resolved, and it's a question now for us to restabilize the supply chain, which we are doing, and we see a normalization of that supply chain. You know, it is behind us. The revenue for Q2 is guided at 11% over Q1. That's the right trajectory. We continue to expect to grow sequentially after that, in an environment where our footprint continues to be very strong. You know, I mentioned in the earlier call that it was older generations of DDR5.
The market is transitioning from Gen 2 to Gen 3, which is a good catalyst for us. I would say, you know, we guide to double-digit growth in the second quarter. We met, you know, what we said we would meet on the operational strain in Q1, and we will continue to grow sequentially in the quarters after that. We don't see any, you know, issue with the demand, and we don't see any recurrence of the quality issue that we had in Q1. We feel quite confident for the rest of the year as the market moves from Gen 2 to Gen 3.
Okay, great. Just as a follow-up on your LPDDR5 SOCAMM2 server module chipset, when would you expect to start seeing revenue from this chipset? What kind of milestones should we watch to gauge traction?
You know, I would see this as having a very good strategic impact at this point in time. The financial impact in the short run this year is gonna be very minimal, just because the volumes are very small for this type of solution. You know, as a reminder, it only addresses a very small portion of the AI workloads. The volumes are small. The content is small as well. I wouldn't put it in a model for 2026, but it's strategically very, very important because of the trend to look at LPDDR in the server environment in the long run. LPDDR still has issues in order to address the server requirements, but it also has traction, it has benefits. We see this as a stepping stone for us.
It builds on the fact that, you know, over the last few years, we have developed our product line as chipsets, so we have a whole chipset for the SOCAMM2. We have our own teams for power management development, and these are the two new chips that we are proposing for this solution. We see this as a stepping stone. It allows us to engage with other, you know, AI players in the industry, and we're working on next generation as well. I don't think that the financial impact is gonna be significant this year, just given the volumes.
Okay, great. I appreciate the call, Luc.
Thank you.
Your next question comes from the line of Tristan Gerra with Baird. Please go ahead.
Hi. Good afternoon. A quarter ago, you highlighted shortages and sounded a little bit maybe not cautious, but muted on the growth opportunity, and you provided a fairly muted data center unit forecast. How are shortages of components potentially impacting your revenue this year? What are you seeing, you know, that's different now than a quarter ago? Given the outlook for D1 to remain very tight next year, you know, how should we look at your product revenue growth and specifically your RCD growth while excluding the new product layers that will be adding on to that from a year-over-year growth standpoint? In other words, would you expect the same type of growth next year year-over-year versus this year?
I understand you're not guiding for next year, but just wanted to get a bit more color on what you see on the market that potentially could constrain on your growth. Clearly, that's an issue for a lot of other companies as well.
Yeah. Thank you, Tristan. First of all, yeah, let me say a few words about the demand. You know, we do see demand continue to grow for standard servers, which is good for us with agentic AI in particular. We expect the server market to grow faster this year than last year. We model it at low double-digit growth because, despite the excitement around AI, there's also a large portion of the server market that is not AI related. We do see demand growing on the server side, which is really a good catalyst for us. As we said last quarter, we're watching the situation with supply, especially on the back end.
Certainly since last quarter, the situation has not improved. You know, we're working with our suppliers, but the lead times, you know, are long. There is tension on the back end. We take this into account when we forecast our business. This is one factor. You know, the other factor that affects or that comes into play when we forecast is the timing of launch of new platforms in the market. You know, as you know, it's been the case in the past for us, you know, the launch of our new product depends on the launch of new platforms in the market, and that's a dependency that we have.
We don't see the situation as materially different than what we saw in Q1. From a supply standpoint, things have not improved. We expect the supply situation to be tight going into 2027 as well when we talk to industry players.
Okay, that's useful. Then as my follow-up question, any additional color on the MRDIMM opportunity? I know you've talked in the past about, you know, some very initial shipments late this year, specifically with inferencing. Any additional color as to, you know, where it could be in terms of revenue in 2027? I think you've talked in the past about your expectation that you probably fully realize that $600 million TAM for MRDIMM by 2028. You know, what should we be looking at for next year, kind of, in between? And what's really driving that? What's going to be driving the demand? Is it going to be mostly inferencing? And any additional color you may have, you know, beyond what you've said in the past on, you know, customer interest, you know, for this technology and where it's going to go.
Thank you, Tristan. First, you know, we continue to make progress in the launch of these products and the interaction with our customers on this MRDIMM. We're excited by the opportunity for the reasons we've always talked about, larger capacity, larger bandwidth in the same ecosystem, so the adoption is easier. The main, I would say, factor affecting the ramp of our MRDIMM is gonna be the timing of the launch of the platforms from Intel and AMD in particular, where they do have this capability attached, you know, in the next generation platform. We continue to see the ramp starting in 2027 in earnest, with a SAM at this point in time, you know, which we feel is valued at about $600 million.
You know, as I keep saying, you know, the SAM, once the products are in the market, you know, and we get feedback and the market gives us feedback, we're gonna have a much better view of that SAM. At this point in time, this is the right number to keep in mind.
Great. Thanks again.
Thanks, Tristan.
Your next question comes from the line of Aaron Rakers with Wells Fargo. Please go ahead.
Yeah. Thanks for taking the questions. I guess kind of just building off that last question first. You know, when you kind of think about the $600 million, you know, incremental opportunity around MRDIMM, I can appreciate that, you know, there's a lot of unknown variables at this point, but I'm just curious, as you rolled up that expectation, what assumption are you making in terms of attach rate on AMD Venice and Diamond Rapids at this point? You know, how might that evolve? I mean, I would assume that you're being rather conservative on that attach rate at this point. Then also on that, how do you see CXL starting to play out?
You know, at this point in time, we modeled, you know, a lower attach rate, as I said. You know, in my experience, until a product is in the market, it's hard to make those models, you know, more significant. There are a lot of variables coming into play. As we just said, the most important one is the timing of rollout of these platforms in the market. There's also, you know, the whole situation with, you know, DRAM pricing and the, you know, the prices of modules and how, you know, our customers are going to make the decisions between the combination of modules they wanna have in the current, you know, memory cycle environment.
We model, I would say, a conservative percentage, you know, for MRDIMM at this point in time. You know, ramp will start when the platforms ramp in the market, and that's when we're gonna have a better view.
Any thoughts on CXL?
Oh, sorry, I missed the second part of your question. I'm sorry, Aaron. CXL, you know, we do have very good traction on our IP business. We are not planning to, you know, launch a semiconductor product at this point in time. You know, we do have this on our shelves, if you wish, as we designed one a couple of years ago. But we do see with agentic AI, we do see demand for, you know, standard DIMMs and MRDIMMs, you know, as being the main beneficiaries of that, and that's where we will continue to focus our attention.
Yeah. One final quick one. When you guys talk about the opportunity to grow sequentially in the product revenue, you know, into the back half of the calendar year, I'm curious if you were asked about seasonality in the second half versus first half, if there's anything that changes your views, maybe relative to the last couple of years on, you know, I think you've seen some decent growth second half versus first half. Thank you.
Yes, thanks, Aaron. That's a good observation. We actually do see, you know, the second half shaping up slightly differently than the first half, you know, with better growth in the second half. You know, a lot of times it has to do with the launch of new platforms. You know, they typically hit the market, if they are on time, you know, in the second half of the year, and that's where you have, you know, more products there. But even if you look at the first half of this year at the midpoint of our guidance with Q2, and you look at the first half of last year, you know, we're still growing, you know, close to 18%.
You know, the first half, despite our issue in Q1, is still much higher than the first half of last year. We believe the second half is gonna show growth. We do see some seasonality, and typically our second half is stronger than our first half.
Yep. Thank you.
Thank you, Aaron.
Your next question comes from the line of Gary Mobley with Loop Capital. Please go ahead.
Good afternoon, gentlemen. Thanks for taking my question. If I take the sum of your licensing billings and your contract and other revenue in the first half of this year, from the results and the guide, and compare that to the same period last year, it looks like you're generating some abnormally strong growth. Is that due to any sort of variance in the patent licensing, or should I take this to mean that your silicon IP business might actually be running north of $150 million annually right now?
You know, thanks, Gary. We can see some quarter-to-quarter variations in these two categories just from the nature of the business. I would say that, underlying this, we see very good traction on our silicon IP business. Actually, AI has an impact on our silicon IP business, which is also very positive, as people who develop custom solutions for AI are looking for new interfaces and new security solutions like the ones I mentioned in the prepared remarks. We do have very good traction on the silicon IP business, and we continue to expect this business to grow 10%-15% a year based on that. Our other business, our patent licensing business, can also change from quarter to quarter.
You know, we do renew agreements on a regular basis. Sometimes these agreements are, you know, structured in different ways, depending on the customers and what they wanna do. We have some strong quarters, some quarters that are not too good. On average, you know, this business continues to be stable at $220 million. I would say, I would not, you know, pay too much attention to the quarterly split, you know, on these revenues, but the fundamentals are really good. What I would add to this is if you look at our patent licensing business, our silicon IP business, or our product business, they all benefit from, you know, what's happening in the memory subsystem area.
You know, they all benefit from AI and the move to AI inference. That gives strength to our results, you know. When we have a challenge like we had last quarter on the product line, then we have these two other product lines that allow us to meet our numbers.
Okay. Thank you. I just wanna follow up and ask about CPU roles in AI-optimized servers. There's been a lot more noise recently indicating a higher ratio of CPUs to GPUs in AI-optimized servers driven by agentic workloads, and you sort of hinted at that. To put this into a question, I'm curious, if we've moved to a point in time where we might see a one-to-one ratio of CPU to GPU, does this alter your view on the growth rate of your SAM for your product revenue, or the size of it?
We are excited with, you know, where the market is evolving with agentic AI and inference. If you look at the types of architectures, software architectures, hardware architectures that inference requires, then you clearly see that the ratio between CPUs and GPUs is changing in favor of CPUs. Overall, that's a very good thing for us. It's just coming from the nature of, you know, what inference is or what agentic AI is. That's a good thing for us. Is it gonna be one-to-one? Very difficult to say at this point in time. You know, everyone is trying to optimize now the memory subsystem. You know, everyone is trying to use HBM where it's really good, use LPDDR where it's really good, and use DDR and MRDIMMs where they're really good.
I would say that DDR and MRDIMMs will continue to be, you know, the workhorse of these, you know, inference AI solutions. The fact that all of these systems start to coexist, you know, HBM, DDR, LPDDR, is really good. You know, they all try to resolve a different part of the AI workload, and it plays to our strengths because this is what we've been doing for, you know, forever at Rambus. I would say that the move to AI inference and the move to agentic AI will change the ratio in favor of CPUs, and that's good for us.
Thank you. Appreciate it.
Thank you.
Your next question comes from the line of Sebastien Naji with William Blair. Please go ahead.
Thank you. Maybe my first question, I wanted to ask about the new SOCAMM products that you announced last week. Could you maybe just comment on what Rambus' dollar content looks like for each SOCAMM module, just across the different voltage regulators and the SPD Hub? Any unit economics you can give us?
You know, given the current competitive environment, you know, I stay away from giving, you know, pricing on these things. I would say that the content on, you know, on a SOCAMM from the standpoint of Rambus, you know, we have three voltage regulators and an SPD Hub, so the content is minimal. This is what I was saying, you know, earlier on one of the questions. I do believe that this is strategically important for us because in the long run, LPDDR may play a larger role, especially in next generation LPDDR solutions in the data center. From a content standpoint, it stays minimal and the volume stays minimal. I would leave it there.
Okay. Okay, that's fair. And maybe just turning back to the RDIMMs. Could we get an update on the progress you're seeing with companion chips? How much revenue came from those companion chips in Q1? And then maybe just relatedly, how important is it for your silicon customers that they have all of these DIMM components bundled together coming from one provider versus having to put these together from different providers?
Yes. Thank you. John, go ahead.
Sure. The newer products, Sebastien, contributed a low double-digit percentage of our total product revenue during the first quarter. We would expect it to be roughly the same in the second quarter as we see some growth in the overall revenue contribution from that part of our business.
Yeah. What I would add to this is that this is steady growth quarter-over-quarter. You know, you saw this, you know, in 2025, every quarter we had a slightly higher percentage. We continue to do that, and we expect to continue to do that for the second half of the year. We expect maybe to exit the year with our new chips at a mid-double-digit percentage of product revenue. Now to your other question, it is becoming more and more important for customers to, you know, have the whole chipset from one supplier, especially as the performance requirements increase. The reason has to do with interoperability.
You know, making sure that all of these chips on a module work well together at very high speed and in very harsh environments is becoming more and more difficult to achieve. That's why our, you know, customers request, you know, that we have the whole solution and take them through these generational changes.
Makes a lot of sense. Thank you, Luc. Thank you, John.
Thank you.
Your next question comes from the line of Kevin Cassidy with Rosenblatt Securities. Please go ahead.
Yeah. Thanks for taking my question. During the quarter, as you were building inventory, were there any orders that you had to leave on the table that you weren't able to book because you didn't have the inventory, or maybe some upside surprise?
No, we've not been in that situation. There are a few market dynamics that we have to anticipate. One is, as I said earlier, we do see supply tightening, especially on the back end, so we want to make sure that, if that situation continues, we have enough supply for our customers. The second thing that is happening is the fast transitions between generations. You remember we were talking about generation one moving to generation two. We indicated in the last call that generation three is ramping very fast. We want to make sure that on these new generations of products we also have enough inventory, because the ramps on the customer side can be quite steep, and we just don't want to lose them.
Okay, I understand. And even as you're using your balance sheet to build more inventory: when Intel reported, they said they were even able to ship some previously written-down inventory. It seems like the demand for CPUs, and also DRAM, is so strong that maybe older generations will get a little bit of a revival. Is that possible, or does it sound like everything's shifting to Gen 3 very quickly?
From a demand standpoint, the bulk of the demand for DDR products is certainly shifting to Gen 3. What you're describing, in terms of using inventory of older products to serve demand, is something that we continuously do and look at as part of our inventory management processes.
Okay. Great. Thank you.
Thank you.
Your next question comes from the line of Mehdi Hosseini with SGI. Please go ahead.
Hi, Luc. This is Bastien filling in for Mehdi. My first question.
Mm-hmm.
This is on the LPDDR SOCAMM2 chipset. Would you mind clarifying the content of the chipset? It seems that the solution consists of one SPD and three voltage regulators. Do you expect to add any PMIC content there? And what does the pricing of the SPD and the voltage regulators look like relative to the DDR DIMM? I have a follow-up.
Sure. Yes, on the SOCAMM solution, we have one SPD hub and two types of voltage regulators: three voltage regulators in total, but two types, one 12-amp regulator and two 3-amp regulators. That's the content. As I said, the content is minimal. You're asking about a PMIC: there's no power management IC per se; that function is done by the voltage regulators in this generation of products. That's why we say it's very strategic for us.
The way we look at this is that when LPDDR6 is available, that LP memory will offer even more speed and even more power capabilities, and it will then require possibly more complex chips for power management, and we will work on those. One can imagine as well that, as the market evolves in the longer run, it will probably need the equivalent of an RCD. This falls exactly within our strategy, and that's why I'm talking about a stepping stone. We want to make sure that we are early in these new technologies. They do not cannibalize the old technologies; they are complementary to them.
In the long run, they have the potential to grow quite nicely, and they build on strengths that we have, which have to do with signal integrity and power integrity. Now, in the short run, for SOCAMM2 and LPDDR5X, as I said, the volumes and the dollar content are going to be very low. That's a very interesting and strategic stepping stone for us in that area.
Thanks, Luc. That's really helpful. I guess my second question is on DDR5. How should we think about the timing of the ramp of Gen 4 and Gen 5 as we go to higher volume manufacturing?
Gen 4 is going to start to ramp this year, but Gen 4 is kind of a niche generation, if you wish. It doesn't have the same traction as Gen 1, Gen 2, Gen 3, or Gen 5. I think everyone is now waiting for Gen 5. We're going to start shipping products that correspond to Gen 5 towards the end of the year. But just like for MRDIMM, Gen 5 is completely dependent on the timing of the ramps of the next-generation platforms from Intel and AMD. That is where they're going to be adopted. That's why we see initial volumes this year, but the bulk of the volume, just like for MRDIMM, is going to start in 2027.
Got it. That's very helpful. Thank you, Luc.
Thank you.
Your next question comes from the line of Mark Lipacis with Evercore ISI. Please go ahead.
Great. Thanks for taking my question. A question on the DIMM attach rate. Is it different for CPUs used to perform orchestration in agentic AI versus CPUs used in standard servers versus CPUs that might be put next to the GPUs, the XPUs, and the custom ASICs? Should we think about the attach rates differently for these?
It's a very good question, and a very difficult question as well, Mark. I would say the way we look at it is that if you look at inference and agentic AI, the functions that have to be performed by these CPUs are closer to those of standard CPUs. I think the highest attach rate you would find is really close to the GPU HBM platforms. That's where you have the heaviest loads, if you wish, for these CPUs. That's how I would compare it at this point in time. I would say if you take a DGX box, with GPUs and HBM, the CPUs there are the CPUs that use the most memory in terms of capacity and bandwidth.
I would say that when you go to inference, it's probably a little less, but it's difficult for us at this point in time to model that.
Mark, your line is open.
Hi. Sorry, I guess my phone dropped, and I don't know if my question came through. Luc, I was wondering, should we think about the DIMM attach rate differently for CPUs that would be used in orchestration for agentic AI versus CPUs used in standard servers versus CPUs that are used for inferencing and get put next to the GPUs, the ASICs, and the XPUs? Is the density for the DIMMs different there?
It's a very good question, Mark, but a very difficult question to answer. I would say the way we look at it at this point in time is that the highest use of memory capacity and bandwidth really resides close to the GPUs and the GPU HBM clusters, if you wish. That's where you have the most need for very high capacity and very high bandwidth, which on average could be higher than what we find in inference and other solutions. We have not modeled that at this point in time; it's hard to model.
We do see, in aggregate, the fact that inference is being added to training as very good traction for the use of standard DIMMs or MRDIMMs in general. But the attach rate is difficult to model at this point in time.
Gotcha. Okay, that's fair enough. Then the tightness in the back end that you're noticing: do you know, or can you explain, what the cause of that is? Is it because a lot of the back end happens in Southeast Asia, and they procure a lot of energy from the Middle East? Is that it, or is it capacity? Is it more that the whole industry is in a great recovery period and capacity utilization rates are really ticking up? Do you have a sense of the cause of the tightness in the back end?
There are a couple of reasons. One is that demand, especially in the data center, has become very high recently; there's increased demand there. The second reason is that a lot of semiconductor suppliers have moved their back-end supply chains away from China to other countries in Asia, and that has put a strain on the total capacity of these back-end suppliers. It's the combination of the two. We've not yet seen an effect of the war. There are discussions about some basic elements, like gas, that are going to be affected, but we don't see this yet.
The main reason at this point in time is increased demand, especially in the data center, combined with semiconductor companies moving their supply chains outside of China.
Okay, that's really helpful. A last question, if I may. If you think about your market share this year, are you of the view that you are a share gainer, or that you keep share flattish or down? What is your view on your ability to gain share? Thank you.
Yeah, we continue to gain share. From 2024 to 2025 we gained share, and we exited 2025 at a mid-40% share. There's no indication that we're not going to continue on that trajectory. This year, at a high level, the market is really transitioning from Gen 2 to Gen 3, and our footprint in Gen 3 is really good as well. There's no sign of any erosion of our share. If we add the other components, then we'll grow faster than the market, because we add content as well to what we ship. So again, we're very pleased with where we were in 2025. As you know, Mark, we tend to talk about share on a yearly basis.
It can fluctuate from quarter to quarter, but we don't see any sign of erosion of our share going into 2026.
Gotcha. Very helpful. Thank you.
Thank you, Mark.
At this time, there are no further questions. This concludes the question-and-answer session. I would now like to turn the conference back over to the company.
Thank you to everyone who has joined us today, for your continued interest and time. We look forward to speaking with you again soon. Have a good day.
Thank you. This now concludes today's conference.