Rambus Inc. (RMBS)

Investor Update

Nov 4, 2022

Operator

Before we get started, if you are a member of the press or media, please disconnect at this time. This is a restricted line. Any unauthorized party in this meeting or any unauthorized use of the information communicated in this meeting is subject to prosecution to the fullest extent of the law. Any unauthorized person, including the media, that is on the line at this time, please disconnect. Please note, today's call is being recorded.

Gary Mobley
Semiconductor Research Analyst, Wells Fargo Securities

Morning, everybody. My name is Gary Mobley. I'm one of the semiconductor research analysts here at Wells Fargo Securities. Joining me is Aaron Rakers, who also covers semiconductors here at Wells. With us today we have our two guests. We have Steven Woo from Rambus, and we have Desmond Lynch, who you all may know is the CFO of Rambus. The topic here today is a discussion about CXL, about the market adoption of the technology, what it means for the industry, what it means for Rambus. Before we get into that, I wanted to turn it over to Steven so that he can introduce himself and establish a context for our discussion.

Steven Woo
Fellow and Distinguished Inventor, Rambus

Thanks very much, Gary. Hi, everyone. My name is Steven Woo. I'm a fellow and a distinguished inventor here at Rambus. I've been with the company for more than 25 years, and I've done a number of roles within the company. I've worked in technology development and architecture and product planning and strategy as well. I worked my way back around to the research side of our organization, and today I lead a team of senior architects looking at some of the important technologies that are shaping the future of things like the data center. I'm very happy to be here to get a chance to talk about CXL and its impact in the future.

Gary Mobley
Semiconductor Research Analyst, Wells Fargo Securities

Appreciate that, Steven. Steve, I should say, as perhaps you like to be called. I know you have some slides that you can perhaps share with us, and I guess as an introduction to the slides, I was hoping that maybe you could, you know, give us an overview of the technology. I know there's a lot of buzz out there relating to CXL. Perhaps you can start out by telling us what all the excitement is about.

Steven Woo
Fellow and Distinguished Inventor, Rambus

Sure. Yeah. I think that's a great place to start. You know, CXL is a really interesting technology, and there have been these challenges that have been brewing in the data center for the last 10, 15 years. On this slide here, you see what three of the really biggest challenges are, and they're only gonna get worse going forward. The first on the left is that companies like Intel and AMD, they've been great at delivering more cores per CPU. The challenge that happens is every one of these cores wants its own memory bandwidth and its own memory capacity. They're all running different programs, and they all have different requirements.

The challenge that comes up is you need to keep scaling that memory bandwidth and capacity so each new core that gets brought in has some resources that it can do its own work in. What you'll see on that slide, in that little graph, is that the green line is the number of cores per CPU. It's going up and to the right. If you look at the orange line, which is the memory bandwidth per core that's available, you see that it's unfortunately sloping downward to the right, which means every new core is getting less memory bandwidth than in previous years. Just the way programs work, you know, you always want the same or more resources. This is a challenge.
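The trend in that graph comes down to simple division: total bandwidth grows, but core counts grow faster. The figures below are purely illustrative, not taken from the slide:

```python
# Illustrative only: core counts roughly double each generation, while
# total memory bandwidth grows more slowly, so bandwidth per core falls.
cores_per_cpu = [16, 32, 64, 128]      # hypothetical core counts
total_bw_gbps = [200, 300, 460, 700]   # hypothetical total GB/s per CPU

bw_per_core = [bw / cores for bw, cores in zip(total_bw_gbps, cores_per_cpu)]
for cores, per_core in zip(cores_per_cpu, bw_per_core):
    print(f"{cores:4d} cores -> {per_core:5.2f} GB/s per core")
# Per-core bandwidth declines even though total bandwidth keeps rising.
```

Even with total bandwidth up 3.5x across these hypothetical generations, each core ends up with well under half the bandwidth it started with.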

You know, the question is, how do you continue to give the resources that these cores want? In the middle there, you see, what's this classic picture of the memory hierarchy. You have different kinds of memory and storage that are available to your processor cores. There's on-chip caches, which are very fast, and then you have direct attached DRAM, which is a little bit slower, but you get a lot of it, and it's great because it can hold lots and lots of data. Then, you know, once you get past that, there's this huge gap in terms of latency and bandwidth. You know, after your locally attached memory, you have storage. You have to be very careful as a programmer.

You don't wanna be executing out of your disks because it's so incredibly slow and it's very low bandwidth as well. You can get a lot of it, but from a performance standpoint, it's really just not what you wanna be doing. The question is, you know, as our datasets get larger and as our programs get bigger, how do you either avoid that gap or really, you know, the current view is how do you fill it? I need something else that can go in there. Then the third thing on the right is an issue of memory stranding. Now, memory is a very important resource in your system.

It's, you know, it's very useful and it's fast compared to anything else that's outside of the processor. The challenge is that the way servers are put together today, they're put together in kind of these fixed ratios where you buy a server and it's got a couple of processors and some fixed amount of memory. It really forces you to think, "Well, you know, I have to size things in a way that it's the worst case workload I'll ever deal with." Which means sometimes in the average case, you're not really using all that memory. It would be really nice if there was a way to have more of a composable infrastructure. As my jobs come in, it'd be nice to be able to just marshal whatever resources I need for that job.

In some cases, I might be borrowing them from someplace else, and I can just give them back when I'm done. In other cases, I might just have enough that's in the box. Something that's a little bit more tailorable and something that's a little bit more dynamic can really help improve the cost of operation. CXL is really the result of understanding what some of these limitations and challenges are and you know, kind of finding ways to address them. Let's take a look at you know, how CXL does some of these things. I mentioned that big gap in the hierarchy, and you can see here there's kind of these three new levels that get inserted into that gap.

After your direct attached DRAM that goes directly on the CPU, you can now have direct attached CXL DRAM. You can take the same kind of DRAM chips, and you can connect them through another kind of interconnect, the CXL interconnect. It's a little bit longer latency than the direct attached native DRAM, but it gives you this great expansion capability in both bandwidth and capacity. A little bit beyond that is this notion of pooling memory, where we can have appliances that are just full of memory, and you can kind of treat them like a library, where if a processor runs out of its own local memory and it needs more, it can go to this appliance and kind of borrow.

It can provision or check out some memory and use it for the duration of a job and then give it back so some other job can use it in the future. That helps you know improve both the utilization and the operating costs. You know kind of a little bit below that is this notion of switch or fabric-attached memory, where we can have these appliances and they can be accessible through a fabric, kind of like a network just for memory. What that does is it gives you a much broader expansion capability and a much wider sharing capability.
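The check-out/check-in model Steven describes can be sketched as a toy Python class. This is an illustrative model of the idea, not a real CXL API; the class and method names are invented:

```python
class MemoryPool:
    """Toy model of a CXL-style memory appliance that lends out capacity."""

    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.loans = {}  # job_id -> GB currently checked out

    def available_gb(self):
        return self.capacity_gb - sum(self.loans.values())

    def check_out(self, job_id, gb):
        # A job borrows capacity for the duration of its run.
        if gb > self.available_gb():
            raise MemoryError("pool exhausted")
        self.loans[job_id] = self.loans.get(job_id, 0) + gb

    def check_in(self, job_id):
        # When the job finishes, the capacity returns to the pool.
        return self.loans.pop(job_id, 0)


pool = MemoryPool(capacity_gb=1024)
pool.check_out("job-A", 256)
pool.check_out("job-B", 512)
print(pool.available_gb())  # 256 GB left for other jobs
pool.check_in("job-A")
print(pool.available_gb())  # back to 512 GB once job-A returns its loan
```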

These are the ways that CXL can help us introduce new tiers into the memory hierarchy and fill that massive chasm that exists today between DRAM and storage. You know, let's take a look at some of the benefits now. Here's a classic server that you see, you know, in the image here. You have a few different kinds of processors. You have a CPU. Sometimes you have a Smart NIC, and you can have accelerators like AI engines and things like that. The CPU's got its own directly attached DDR memory. In the case of AI, the AI engine will have its own high-performance memory as well. You can kind of see there's not a lot of opportunity to expand across a wide range of capacities and bandwidths here.

Things are kind of fixed. If you start thinking about what CXL can do for you, what you see is there are these new attach points now for memory. You see these CXL memory modules attached to both the CPU and the AI engine. What it allows you to do is, if you need it, you can expand both the amount of memory capacity and the memory bandwidth that's available to both of those engines. The bandwidth is really directly related to the number of links over which data travels. It's kind of like if you had a freeway, like a four-lane freeway, and you went to a five-lane freeway, you can now handle more traffic. It handles more bandwidth.

It has a nice capability, you know, in addition to giving you that capacity, of giving you the bandwidth that these engines really want. What it also does is, if you have a choice to add some memory, because you can add it across these extra links, it scales the amount of bandwidth along with the capacity, and that's important. As we add more cores, each one again wants its own memory bandwidth and capacity. Being able to scale both of those as the core count goes up makes it very, very useful. CXL also offers this other really interesting capability. You'll see on the CXL memory module, there's something, you know, kind of blue here, and it's a CXL memory controller.

What that controller chip does is it gives you what's called media independence. When the CPU talks to this module, it doesn't really know what kind of memory is there. It just passes a request on to this blue chip in the center of the module. That chip can kind of talk natively to whatever kind of memory is on the module. That media independence means I can put lots of different kinds of things there. I could put, like, a storage-class memory. I can put many different kinds of DDR back there. So as the industry thinks about new kinds of memory in the future, there's a possible attach point now and a good method for being able to do that.
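Media independence is essentially an abstraction layer, which a short sketch can make concrete. Every class and method name here is invented for illustration, not drawn from any real CXL software stack:

```python
# Toy illustration of media independence: the host issues generic reads,
# and the controller translates them for whatever media sits behind it.
class DDR5Media:
    def read(self, addr):
        return f"DDR5 read at {addr:#x}"

class StorageClassMemory:
    def read(self, addr):
        return f"storage-class memory read at {addr:#x}"

class CXLMemoryController:
    """The host only ever talks to this interface, never to the media."""
    def __init__(self, media):
        self.media = media

    def handle_read(self, addr):
        # Same host-side request, media-specific handling behind it.
        return self.media.read(addr)

for media in (DDR5Media(), StorageClassMemory()):
    controller = CXLMemoryController(media)
    print(controller.handle_read(0x1000))
```

The host-side call is identical in both iterations of the loop; only the media behind the controller changes, which is the attach-point flexibility described above.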

The other thing that's really kind of interesting about it is if you look at the kind of resources on a chip that are required to talk to a DDR memory, there's quite a lot of pins, and then there's some silicon real estate for like a controller that will talk to the DDR. Excuse me. The interesting thing about this is with CXL, you don't require as many pins now to talk to something that's externally connected. It becomes more pin efficient, and it is a more general interface. If you decide in your system that memory is the thing you need, great. You can through this narrow interface add memory modules.

If you decide that there's something else you need to talk to, maybe it's another accelerator or maybe it's another kind of device, maybe it's a storage device or something like that, you can use these CXL interconnects to talk to that as well. It turns out that memory is often talked about because it's such a precious resource and it's such a limiter in systems. CXL is a more general kind of interconnect that allows you to do really different kinds of things here.

You know, once we get kind of beyond the direct attach capabilities that CXL gives you, there's this really neat capability called pooling that I mentioned before, where, you know, the vision for the future is that you can have a bunch of compute nodes like dual socket servers, and those are shown in kind of this gold up at the top here. You see a bunch of these compute nodes. What we have are a bunch of memory nodes on the bottom. What you can do with these interconnects is the compute nodes can have connections to multiple of these memory nodes, and as they need resources, they just borrow them from these pools. They check out the resources and then check them back in.

It's nice to be able to do something like this because, you know, within a data center, we've seen that the number and variety of the kinds of workloads has grown dramatically. If you go to, you know, Microsoft Azure or Amazon AWS, you can just see the many, many different kinds of computing instances that you can rent. The data center folks are having to keep up with this tremendous diversity of workloads. Something like this allows them to do that and also have a good kind of operating cost as well.

Once this is in place, then what you can do is, you know, you can actually start thinking about not only having direct connections between these compute nodes and memory nodes, but you can have switches in between them, just like we do in Ethernet today. You can start to have now fabrics and things that allow memory to be connected to compute resources, and you can start to scale in really dramatic ways now. These resources can be shared very widely and very broadly across the data center, and that'll continue to give you, as a user, even better use cases where you can have lots and lots of memory at your disposal. From a, you know, from a data center standpoint, it gives you the option to support even wider ranges of workloads.

Just, you know, what we're seeing is kind of this ability to continue to address the needs for more memory and the ability to have nice scalable solutions.

Gary Mobley
Semiconductor Research Analyst, Wells Fargo Securities

Well, thank you for that overview. I forgot to mention something when we started. If you have a question, you can email my colleague, Aaron Rakers at aaron.rakers@wellsfargo.com. I wanted to ask you, Steve, you know, how does the industry plan to support CXL going forward?

Steven Woo
Fellow and Distinguished Inventor, Rambus

Yeah, it's a really good question. You know, what we're seeing is this very broad industry support. There's a consortium, which is always a good first step. You know, what we're seeing is really just tremendous support top to bottom in the supply chain and the value chain. Companies from, you know, cloud service providers to traditional server manufacturers, they're all supporting CXL, they're all part of the consortium. Of course, you need really good support on the processor side. Companies like Intel, AMD, you know, Arm, they're all part of the consortium as well, and they're even publicly talking about it.

You know, they're saying, "Well, my processor," you know, in the case of Intel, they're saying Sapphire Rapids, which is the kind of the next big architecture evolution on their server product line, is supporting CXL. We're getting similar messages from other companies. That kind of support starts to trickle down through the value chain. You see companies that are component manufacturers and suppliers are supporting it. Memory manufacturers are supporting it. Companies like Rambus through Silicon IP and products are supporting it as well. You know, we see this great top to bottom kind of support. Honestly, this is what it should look like when you're looking at a new technology that needs to come out. It takes quite a lot of companies working together.

In terms of, you know, kind of how the industry's looking at a rollout of the technology, there's a longer-term vision to get to pooling and to get to sharing and switching. You know, it takes multiple steps to get there. The early version of the standard, CXL 1.1, is really, you know, kind of an initial deployment of the CXL standard. What you see is the introduction of the different use cases for CXL. It's, in my opinion, a very smart rollout strategy. They're leveraging, you know, existing physical technology like PCI Express Gen 5.

It's something you know and know how to work with, and then you're layering something new on top, which is CXL and some new use cases. Following that is really the deployment of CXL 2.0. This is where you'll see, you know, again, growth and some neat new capabilities. One of them is memory pooling, like I mentioned. And also, you know, we're seeing more capability that's being built in for security as well. There's, you know, a lot of concern that when you're having shared infrastructure among many people, you need to have support for security in that infrastructure as well.

Then, you know, the kind of larger scale-out that I showed with switches and things like that, you know, that comes with CXL 3.0, and it gives you a faster interconnect. PCIe Gen 6 is, you know, really the target there. And also this thing called coherent memory is another kind of important capability that CXL 3.0 will enable. Really, it's kind of a fancy way of saying: you know, these days everybody wants to take a big problem, and they want to break it up into pieces and run it in parallel across lots of engines. The AI guys do this all the time, where you have these big language models, and you train them across many, for example, NVIDIA training engines.

People wanna do that with the CPUs as well, and you have to have a way to share that memory. You know, now once you have a big problem that you break into pieces, then you have to have a way that they can communicate, and they can see, one processor can see another's updates. That's what coherent memory is all about. These are just some of the ways that CXL is being enabled, but it's got a very, you know, I think well thought out enablement plan and a very well thought out progression.

Gary Mobley
Semiconductor Research Analyst, Wells Fargo Securities

To recap, the expanders, pooling, and switches, are those really kind of the three main product families that will essentially embody the CXL technology? Related to that, how is Rambus positioning in the CXL market?

Steven Woo
Fellow and Distinguished Inventor, Rambus

Yeah, that's right. I think it is a progression, and then, like you mentioned, you know, the expansion and pooling and then larger scale switching. Yeah, it's a growth of kind of the size and scale of what can be managed with CXL. That's right. In terms of what Rambus wants to do, let me kinda talk about, you know, the various kinds of products and things that are kinda slated to be on the market.

You know, like you mentioned, you know, the first step is going to be that bandwidth and capacity expansion, and that's really, you know, gonna start taking off in that kinda 2024 timeframe. You know, pooling is gonna follow shortly after. You can kinda think of the bandwidth and capacity expansion capability as the first step, and it helps to really enable and establish some of the software infrastructure and things that are gonna be needed for pooling. Then following that is when we'll have kind of the whole rack-level composability and large scale with switches as well. That's kind of the deployment timeline, and here's where we're gonna see vectors of differentiation.

You know, people are gonna have a number of different offerings within each of those product categories, and they're gonna be differentiated by things like, you know, what is the bandwidth you're gonna be able to provide? You know, just like we see in memory modules, DIMMs today, there's gonna be speed grades. Some people will specialize in the higher speed grades and, you know, some will specialize in kind of the maybe, you know, lower speed grades but higher capacity, those kinds of things.

Latency is gonna be really important here. That controller chip that I pointed to, that blue controller chip on the CXL modules, for the design of that, there are a couple of different ways you can think about doing it. You know, some of those solutions are gonna really bias toward latency over anything else. Scalability is another way people will differentiate, you know, how many compute nodes can talk to a memory node at a time. There are different trade-offs you can do there as well. Security, like I mentioned, you know, there are provisions in the standard for encrypting the links. You know, there are other things that you can actually do as well.

You can have a Root of Trust that really helps to attest to the authenticity of the memory node itself and, you know, allows access to certain data as well. Things like encryption, just to make sure the data that's there is accessible by only the people that are allowed to access it. There are some other things too, like, you know, power efficiency and reliability as well. There's, you know, a need for these things to be very reliable since they'll have lots of data in them. Those are some of the, you know, kind of the main ways people will differentiate.

As we think about kinda how Rambus is really interested in working in the area, you know, we've already got products today that are helping to enable the CXL infrastructure. You know, one example of that is our portfolio of Silicon IP cores, which is really an industry-leading portfolio. It's got a lot of great connectivity solutions, like PCI Express, and we also have controllers that are important, and security cores as well. Some of these fundamental building blocks, you know, we've got them already today.

Looking further out, you know, we do have product plans, and so, you know, our plan is to attack some of the, you know, some of the early use cases like memory bandwidth, and capacity expansion, and then eventually pooling. You know, we're getting a lot of good interaction with people in the cloud space and the server space as well as the memory manufacturers. Our plans are kind of aligned pretty well to that timeline I just showed.

Gary Mobley
Semiconductor Research Analyst, Wells Fargo Securities

Okay. I know my colleague, Aaron Rakers, is chomping at the bit to ask some questions, so I'm gonna turn it over to him for a little bit.

Steven Woo
Fellow and Distinguished Inventor, Rambus

Sure.

Aaron Rakers
Senior Equity Analyst, Wells Fargo

Yeah. Thanks, Gary. Thanks to Steve for doing this. You know, as Gary mentioned earlier, if anybody wants to ask a question, feel free to email me. It should be shown here on this slide. You know, we had written a note, actually, we published a detailed dive into CXL on Tuesday. You know, I guess maybe the first question I'll ask you is that, you know, as we think about, you know, the opportunity set, do you think that, you know, we're gonna see AMD Genoa launch next week with, I think, 1.1 support. You mentioned Sapphire Rapids. The 3.0 specification, you know, just that to me seems like the inflection point. I guess, first of all, would you agree with that, you know, notion?

If so, can you just like when do you expect to see or how do we as investors kinda gauge the progression of that standard and when maybe that becomes more tangible in the market as far as end product?

Steven Woo
Fellow and Distinguished Inventor, Rambus

Yeah, I think, you know, the end goal is to try and get to something like, you know, the capabilities and the architecture that CXL 3.0 enables. It's one of those things where each one of the releases, you know, CXL 1.1, CXL 2.0, and CXL 3.0, as you see processors supporting those, is an important step in getting to that end goal. I guess the way that I kind of see it is I think each one of those steps is very important for the industry.

You know, that end goal, of course, is important to get to, but it's something that needs to be built up to, and that's kind of why you see this plan for a more phased rollout of the specs and things. You know, I think that each one of them will see kind of an important change in the ecosystem, and I kind of think of each of them as an inflection point, really, and some of them may be bigger than others, obviously. Obviously, the end goal is to get to something, you know, kind of large and encompassing like CXL 3.0. I do think each of the other steps is gonna be important as well.

Aaron Rakers
Senior Equity Analyst, Wells Fargo

Yep. I think on the architecture side, just so I can maybe level set for those in the audience that aren't as familiar: you know, this idea of the pin count, the real estate attributes of the DDR4 or DDR5 controllers. I should point out, I think, actually, you know, Genoa next week moves from 8 to an actual 12 channels of support on that processor. Can you just help us understand, like, you know, does CXL actually take all of that controller functionality? So what's actually on the socket? Help me understand again that kind of migration of the controller off the socket and, you know, what exactly that means. I'm gonna throw the other part of that question in there. You mentioned coherency, right?

The importance of coherent memory with CXL 3.0. Can you just help us appreciate what coherent means relative...

Steven Woo
Fellow and Distinguished Inventor, Rambus

Sure.

Aaron Rakers
Senior Equity Analyst, Wells Fargo

To how things are deployed today?

Steven Woo
Fellow and Distinguished Inventor, Rambus

Yeah, yeah, absolutely. Let me take that first question about, you know, kind of the interconnects and things like that. One thing that's really interesting about it is, like you mentioned, Genoa is gonna go from eight channels of DDR to 12. What that does speak to is both the need for more memory capacity and the need for more memory bandwidth. Those are things that are really important. It turns out that the DDR interface itself is pretty wide, and so there are a lot of pins associated with each of those interfaces.

If you look at the CPUs that are coming out, you know, when you go from eight to 12 memory channels, it's dominated by pins that are meant for kind of the memory interface. Now, what that means is every one of those processors pays for the pins and the real estate for the memory controller that's used to talk to the memory channels. They're wide and, you know, you are getting to a point where, wow, you know, it's getting harder and harder to add pins. The thinking is, with something like CXL, especially for memory, it's always good to have some, you know, locally attached memory.

If you can take those pins and maybe use them in a slightly different way, a CXL interface only takes a third of the pins of a memory interface. What you can then do is you can say, "Wow, you know, I can effectively put three times as many CXL channels in that same pin count." Now, what you have to do in order to do that is you end up having a module with this, you know, kind of smarter chip that has a memory controller in it. On the CPU side, you do have to have a digital logic core that's capable of talking the CXL protocol to this chip that's on the memory module.

What gets replaced is kind of the wide pin count interface of a memory interface, along with the memory controller. That memory controller moves into this blue chip, and you replace it with a smaller number of pins and then a smaller digital core that talks to this blue chip. Really what it does is it gives you kind of a more pin efficient interface, and it takes less real estate to implement that interface. It's a little bit more general too, so you don't have to put memory there if you don't want or you can dynamically reconfigure your server if you wanna use memory, you know, one month and then you decide you want an accelerator there on another month, that kind of thing.
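The pin trade-off can be written out as back-of-the-envelope arithmetic. The absolute pin counts below are placeholders; only the roughly one-third ratio is taken from the discussion:

```python
# Hypothetical pin budgets illustrating the trade-off described above.
ddr_pins_per_channel = 120  # placeholder for a wide native DDR interface
cxl_pins_per_channel = ddr_pins_per_channel // 3  # "a third of the pins"

# Pins a CPU would spend on 12 native DDR channels.
pin_budget = 12 * ddr_pins_per_channel

ddr_channels = pin_budget // ddr_pins_per_channel
cxl_channels = pin_budget // cxl_pins_per_channel
print(ddr_channels, cxl_channels)  # 12 vs 36: ~3x the channels in the same pins
```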

Aaron Rakers
Senior Equity Analyst, Wells Fargo

The coherent question.

Steven Woo
Fellow and Distinguished Inventor, Rambus

Yeah. Coherence is this interesting kind of thing. I'll give you an example. One example might be, imagine you had 10 billion numbers, and you had to add them all up. One way you could do that is you could have one CPU just kind of run through all 10 billion numbers and kind of add them all together. Another way to do it is you could break the problem up into 10 pieces, and you could have 10 processors, each adding up a billion numbers, and when they each had their own kind of partial sum, they could agree to add them all together. You could see, you know, that could be done a lot faster because, you know, you're employing 10 engines.

Now, the challenge is when you get to the very end and everybody has their own partial sum, you have to agree on kind of where to put the global sum of all these numbers. You usually will say, there's this one memory location where the sum is gonna be, and what every processor is gonna do is it's gonna say, "I'm gonna take whatever value is in that location. I'm gonna add my partial sum to it." By the time we get all done, we get the sum total of these 10 billion numbers. What this coherence does is it ensures that when one processor does its update to the data, that other people can see it.

The worst thing that could happen is, you know, we each go to write in our value, and we missed some update that someone else has put in. Coherence is a very interesting way that the hardware inherently ensures that you don't miss an update. That's very important as people move to these multiprocessor sharing kind of applications. That's, you know, really the direction people have been going for a while. You see it in AI and you do see it today in processors as well. That's really what it's about.
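The partial-sum example translates directly into code. In this sketch, a software lock stands in for the guarantee that hardware coherence provides for shared memory: no update to the shared total gets lost.

```python
import threading

# 1,000 numbers stand in for the 10 billion, split across 10 workers.
numbers = list(range(1, 1001))
N_WORKERS = 10
chunk = len(numbers) // N_WORKERS

total = 0
lock = threading.Lock()

def add_chunk(i):
    global total
    partial = sum(numbers[i * chunk:(i + 1) * chunk])  # this worker's partial sum
    with lock:  # serialize the read-modify-write so no update is missed
        total += partial

threads = [threading.Thread(target=add_chunk, args=(i,)) for i in range(N_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(total == sum(numbers))  # True: every partial sum landed in the total
```

Without the lock, two workers could both read the old total, add their partials, and write back, with one update overwriting the other; that is exactly the lost-update hazard coherent memory prevents in hardware.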

Aaron Rakers
Senior Equity Analyst, Wells Fargo

Yeah, that's very helpful. I'm gonna ask you the question I've gotten a handful of emails on, you know, already, and I think a loaded question that a lot of us investors are kinda thinking about. I think there was an article a while back talking about this idea of stranded memory, right? I think Microsoft was out there saying, you know, "Look, 25% of our Azure, you know, memory is quote-unquote stranded or underutilized." Does CXL, in a simplistic form and fashion, do you see CXL as expanding memory in the data center? Or do you see it potentially allowing data center customers to deploy less memory?

Steven Woo
Fellow and Distinguished Inventor, Rambus

It's a good question. You know, my firm belief is that it will end up expanding both the capabilities and the amount of memory that's needed in the data center. It's really, you know, I think historically, if we look back at some of these really interesting technologies that have been introduced in the data center that improve efficiency, what we find is that they end up increasing the market as well. One example of that is just multi-core processors. I mean, there was this concern that when you went from single core to dual core processors, that suddenly the number of servers sold would get cut in half.

Really what happened, you know, historically, you can see what it did: it made a more efficient computing infrastructure that people wanted more of, you know, and it kind of drove, you know, more hardware sales. Same thing with virtualization. You know, once the hardware got more efficient, you know, then people began to realize, "Hey, there are these other really neat use cases I can implement." That, you know, just allowed people to write better software, and it made the demand higher. My belief here is that, you know, when you talk to software people, there is this concern of, "I'm not allowed to write the kind of software I really wanna write because I'm limited by the amount of memory that's available to me."

I think once you start to open that up, like we've seen time and time again, when you give software people more capable hardware, you get better applications, you get new use cases and use models, and really interesting things start to happen that just spur a higher demand for those use cases. My belief is that, just like we've seen many times over the last 20 years, more capable, more efficient, you know, and more rich hardware being provided is gonna lead to just higher demand and better software.

Aaron Rakers
Senior Equity Analyst, Wells Fargo

Yeah. Yeah, and I agree. I think actually Micron, as a reference, at their analyst day earlier this year said that CXL would be a $20 billion addressable market over the next, I forget the time horizon, but definitely progressing rapidly as we move forward. I'm gonna throw one other kind of architectural thought out there and wonder if this resonates with you, and feel free to refute it. When you start to go down the path of CXL 3.0, and you start to think about memory appliances and disaggregation and composability, is it not fair to also think about this as similar to the transition when we went from direct-attached storage in the back of a server to a network topology, Fibre Channel SANs or NAS?

Does that coincide? Does that make sense to you as well, as we kinda think about that 3.0 path going forward?

Steven Woo
Fellow and Distinguished Inventor, Rambus

It does. I mean, what we've seen in general is that, like you point out, connectivity is its own benefit. You know, once you have connectivity and shareability of a resource, and storage is a great example of that, you see that people begin to use it in new ways. Really the driver, I think, in both of those cases is that the amount of data in the world is continuing to grow rapidly, and you need some way to store that data and parse through it all to get some meaning from it. You know, one of the big drivers we're seeing is that the amount of digital data in the world doubles roughly every two to three years.

We gotta search it all, we have to process it, we have to extract meaning from it. Being able to share that among multiple engines, you know, in a way like through switched memory that CXL 3.0 will allow, that's just gonna allow us to again have better insights and really better use cases around the data.

Aaron Rakers
Senior Equity Analyst, Wells Fargo

I think in that context, should we be thinking about a system-like architecture? Instead of, like, DDR or PCIe-connected slots inside of a server, actually taking CXL outside the server and having, again, those dedicated appliances. Where do you see the appliance market evolving towards?

Steven Woo
Fellow and Distinguished Inventor, Rambus

Yeah, it's a good question. I think a lot of it really depends on the use cases that are put into place. There's a definite need for something like the appliances, just based on things you see, like stranding, and the fact that people want these large-footprint applications that go well beyond the amount of memory that's inside a server. You know, an application like that might run on just one or two cores, and the number of cores in the CPU is growing dramatically as well.

These appliances are, I think, a really nice solution to address both the growing need for larger-footprint applications and the fact that CPUs have more cores in them.

Aaron Rakers
Senior Equity Analyst, Wells Fargo

Yeah. Gary told me to kinda keep rolling here, so I'm gonna keep rolling a bit. I appreciate you.

Steven Woo
Fellow and Distinguished Inventor, Rambus

Sure.

Aaron Rakers
Senior Equity Analyst, Wells Fargo

Fielding these kinda questions for me. Rambus plays in that market in what form? I'm not as familiar as, obviously, Gary and Des, but for anybody in the audience, what exactly is Rambus' role in the evolution of that product portfolio?

Steven Woo
Fellow and Distinguished Inventor, Rambus

Yeah, let me, I've got a couple of slides here that kind of talk about it. One is, we do have a long history in memory. You know, our company was founded to develop new memory solutions. Just some examples of where we've been at the front for many years: we had the first DDR5 buffer chipset, and we've been helping enable the industry on new memory technologies. We also do a lot of silicon IP, where we specialize in really high performance, things like HBM and GDDR for AI engines, and we do a lot of controller and security cores as well.

We do have a strong focus on innovation, and that's where I work right now, in Rambus Labs, where we do a lot of forward-looking work. We did a very interesting project about seven or eight years ago where we looked at pooled memory, and you can see a picture of this Smart Data Acceleration board here, where we looked at some of the foundational technologies and changes that would be needed to support pooled memory. We lean on all of these capabilities. Really, it's just kind of the DNA of our company to look at memory and what some of the problems are. All of that gets rolled into how we think about things.

You know, where we're positioned really well, you can kind of see here: this gray box in the middle represents the kind of things that have to go into a CXL controller chip. What you have to have is some physical interconnect to the outside world, since these processors are gonna talk over CXL links. The physical implementation of it is PCI Express, and we've been doing PCI Express controllers and PHYs for what seems like forever. It's a very important part of our portfolio, and it's something we've been working on for a couple of decades, so we lean on that experience.

In terms of the security cores, what's really interesting is that we have that capability in-house as well, and we have some industry-leading security cores that cover a range of different security levels and needs. As we see the market evolving, security is becoming, you know, more and more important and kind of more of a first-class design constraint that you have to really think about how to architect that into the system as well. We do, of course, do memory controllers and PHYs, and we've worked for a very long time with the industry on qualifying memory. We have our DIMM buffer chipsets that we work with both the DRAM manufacturers and the system houses to validate, and we work with Intel and AMD as well.

That capability, being able to say your controller chip is validated and works with the memory manufacturers' parts, is very important. Really, it's that whole notion of how to sew everything together and how to make sure the chip and the whole system are gonna work together. 'Cause, you know, it's complex. I mean, you're putting multiple pieces together. These are just the areas where I think we're uniquely positioned to serve the market.

Aaron Rakers
Senior Equity Analyst, Wells Fargo

Yeah. I got a few other quick questions, and I'm gonna bounce around here a little bit. You know, I'm gonna go back to that stranded memory discussion.

Steven Woo
Fellow and Distinguished Inventor, Rambus

Yeah.

Aaron Rakers
Senior Equity Analyst, Wells Fargo

As CXL comes to market and you start seeing this disaggregation of the controllers and new products supporting CXL, I guess, first question on that: do you see CXL actually diminishing or replacing DDR DIMMs from a server standpoint? And second to that, it's not like CXL comes to market and you can unlock this existing quote-unquote stranded capacity in your existing data center. CXL is truly a new incremental deployment of memory footprint.

Steven Woo
Fellow and Distinguished Inventor, Rambus

I think, at least from what I see when we talk to people, where CXL is really helpful in the initial deployments is that there are people writing software who are just limited by the resources that are inside the box. The initial deployments of extra bandwidth and capacity through expansion will address that part of the market. That'll be a good enabler for the first step, really, in trying to improve what's available to applications. In terms of what comes next and the footprint, I mean, when we talk to people, they literally can't get enough memory.

The next step is how you provide a large pool of memory that you may not need all the time, but that can be deployed more efficiently and more frequently across the data center, and that's where we see the pooling happening as well. We've been hearing about this particular problem for probably 10 years. Yeah, I mean, in the end, if it plays out like it's played out so many times before, I think the memory footprint within the data center just continues to get larger and larger, in part because these capabilities are unlocked with CXL.

Aaron Rakers
Senior Equity Analyst, Wells Fargo

Yeah. I agree. The final question, then I'll pass it to Gary, unless there's anything that I should be asking that I'm not; I'd definitely love to know that. There's been a lot of standards, right? There's a lot of alphabet soup and acronyms around memory interconnects. I remember writing a note on Gen-Z, OpenCAPI, and I think there was CCIX. CXL has solidified. I guess the simple question is, should we be paying attention to those other standards, or is CXL definitely the defined path as we move forward?

Steven Woo
Fellow and Distinguished Inventor, Rambus

Yeah, I mean, the short answer is CXL looks like it's the one. Really what we've seen is that there's broad industry support, and there are public roadmaps that people talk about supporting top to bottom. The component suppliers, the processor people, the data center folks, everyone is out there talking about their support for it. Behind the scenes, we're also seeing a lot of movement in the value chain to support this. I think the other efforts were all very good. They speak to the problem that memory in particular, and connectivity in general, needs to improve.

You know, what we've seen also with Gen-Z is that it's now merged into the CXL Consortium. Many of the learnings that were important in the early years for these other standards, we now get the benefit of them in the one interconnect that everybody's really kind of unifying around, which is CXL.

Aaron Rakers
Senior Equity Analyst, Wells Fargo

Yep. I'll pass it back to Gary Mobley unless, Steven, you have anything that I should have asked you that I didn't.

Gary Mobley
Semiconductor Research Analyst, Wells Fargo Securities

I do have one closing question.

Steven Woo
Fellow and Distinguished Inventor, Rambus

Sure.

Gary Mobley
Semiconductor Research Analyst, Wells Fargo Securities

Your silicon IP business, if I'm not mistaken, is now at about a $125 million annual run rate, and I think some of the acquisitions that form the basis for your licensable CXL and PCIe intellectual property have been a key driver of that. My question to you is: who do you see as the main competition on the silicon IP side, and who do you see as the main competition on the chipset side of the CXL market?

Steven Woo
Fellow and Distinguished Inventor, Rambus

Yeah. I think it's hard, at least for me, to talk about other companies, 'cause you're not in the huddle with them to understand what their plans are and things like that. I think really the thing that's most important in all of this is that it's very clear it's gonna be a big market. The things we provide in our portfolio today, like silicon IP, have a lot of applicability to CXL-based solutions, whether they be memory-based or accelerator-based. They're gonna need CXL controllers, they're gonna need PCI Express interfaces.

Because there's a new type of interconnect, that grows the market for some of the solutions we have today. In terms of where we go in the future, it's very clear that there's a tremendous need, and there will be other companies coming in. I think it's hard for me to say what the competitive landscape is really gonna look like. What I do think is that we're well-positioned to play well in this space, given all the capabilities we have in-house and our experience, especially in the areas of memory enabling and memory validation.

I think we're in a very good position to play an important part in certainly the CXL ecosystem, and then an important part in the memory expansion and pooling capabilities for CXL in the future.

Gary Mobley
Semiconductor Research Analyst, Wells Fargo Securities

Des, did you wanna add something?

Desmond Lynch
CFO, Rambus

Yeah, Gary, let me just add on to what Steve said there. I think on the silicon IP side, we'll continue to see our traditional competitors, Synopsys and Cadence, on the IP side. On the chipset side, traditionally on the memory interface buffer chip side, we've had our traditional competitors there. We're seeing new companies coming into this space, such as Marvell; Microchip has invested as well, along with newer start-up-type companies such as Astera Labs. It's clear to us that with the size of this market, we're going to see increased competition. You definitely see much more diverse competition on the chipset side, Gary.

Gary Mobley
Semiconductor Research Analyst, Wells Fargo Securities

Okay. We're running up against time, but before we close, I wanted to ask you, Steve, is there anything that we missed? Is there anything that you wanted to highlight that we haven't covered yet?

Steven Woo
Fellow and Distinguished Inventor, Rambus

You know, not that I can think of. I think we covered a lot of good ground today. It is a very exciting technology; things like this don't come along that often. Part of why there's so much excitement is because I think people can see both the way that CXL addresses the many needs that are going on in servers in the data center, and the fact that there's a chance now to change the direction of architectures. That's always really exciting, when you get the opportunity for large-scale changes. We're very excited, for the reasons we talked about.

Yeah, we're really looking forward to the future and what CXL will bring.

Gary Mobley
Semiconductor Research Analyst, Wells Fargo Securities

Okay. Steve, Des, on behalf of Aaron, I wanted to thank you for the time that you spent here with us and enjoy your weekend.

Steven Woo
Fellow and Distinguished Inventor, Rambus

Okay. Thanks very much.

Desmond Lynch
CFO, Rambus

Thank you.

Aaron Rakers
Senior Equity Analyst, Wells Fargo

Thanks, guys. Thank you so much.
