Rambus Inc. (RMBS)

26th Annual Needham Growth Virtual Conference

Jan 16, 2024

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

Hello, and good morning, everybody. My name is Quinn Bolton. I am the semiconductor analyst for Needham & Company. Welcome to the first day of Needham's 26th Annual Growth Conference. We thank you for joining us. It's my pleasure to host this fireside chat with Rambus. With over 30 years of experience in high-performance memory systems, Rambus is a fabless supplier of industry-leading ICs and silicon IP that advance data center connectivity and solve the bottleneck between memory and processing. Joining me from the company today are Des Lynch, CFO, and Matt Jones, VP of Strategic Marketing. Des and Matt are gonna start with a few slides, and then we'll begin the fireside chat. Des, Matt, thank you very much for joining us, and over to you.

Des Lynch
CFO, Rambus

Thank you, Quinn, and good morning, everyone. It's a pleasure for Matt and me to be here today at the conference. As Quinn mentioned, we will share a few slides today, which will give an overview of Rambus, how we go to market with our solutions, and the challenges that our products solve today. So next slide, please. Before we begin, I would just like to mention our safe harbor statement about any forward-looking statements. I would encourage everyone to read our statements on file with the SEC, as they provide a lot more information on our company than we will cover in this short presentation and discussion today. Next slide, please. Rambus has been a pioneer within the semiconductor industry for the last 30+ years.

The company was founded based upon foundational IP around memory interfaces, which is used in all of today's modern compute systems. The company started as a pure patent licensing business, and that business has been a bedrock of the company. These patent licensing arrangements are long-term agreements, which have provided, and continue to provide, us with millions of dollars of stable cash flow each year. We've used this strong cash flow from patents to invest both organically and inorganically into semiconductor product programs. Really, from an end market perspective, our business is primarily targeted at the data center, with over 75% of our product and silicon IP revenue coming from the data center. Next slide, please. This slide is a great representation of how we go to market with our solutions as a company.

There are three ways that we go to market as a company: our patent licensing business, our silicon IP business, and our chip business. So starting off with our patent licensing business, this has been the foundation of the company, where we have foundational IP in memory and high-speed interconnect interfaces, and we have an extensive patent portfolio of over 3,000 patents. In this business, there is no physical transfer of IP, as companies pay us to use our technology. In total, we describe this as being a $200 million-$220 million opportunity each year, and it's been very stable at the midpoint of this range.

Using the midpoint of $210 million, $150 million of this comes from the top three DRAM companies, and the remaining $60 million comes from a variety of SoC, FPGA, and security companies. In the last year, we have extended our relationships with both Samsung and SK Hynix for a period of 10 years, through 2033 and 2034, which really illustrates both the value and strength of our patent portfolio and innovation engine. This predictable and stable cash stream has really enabled us to invest into the product programs, which directly benefits our licensees. So moving on to the next way that we go to market with our solutions is our Silicon IP business, where we develop leading-edge security and high-speed controller solutions. These IP blocks are sold to our customers, who integrate our IP into larger ASIC or SoC solutions.

Our customer base here is diverse and ranges from large, leading, well-known semiconductor companies seeking the advantages of our IP to start-up companies driving innovation in emerging markets. The business today is at scale, with revenue on an annual run rate of $105 million-$110 million, and we expect the business to grow 10%-15%, which is faster than the market, really driven by the continued growth in the data center, fueled by the broadening adoption of AI. The last way we go to market is our chip business, which is focused on memory interface chips, where we sell buffer chips to the memory module vendors, which go onto the DIMM modules. These chips play a critical role in controlling the speed of commands between the memory and processor.

This business has shown phenomenal growth, growing to around $227 million in 2022, up from $39 million in 2018, as we continue to grow and take market share. The next-generation DDR5 platforms are ramping into the market today, and with the introduction of the related companion chips, we remain very excited about our growth prospects in the business. Let me now hand it over to Matt, who will talk about some of our product solutions in more detail. Matt?

Matt Jones
VP of Strategic Marketing, Rambus

Yeah, thanks, Des. Whether it's ideas, IP, or chips, as Des talked about the three ways we go to market, each of those segments is focused on solving the critical interface between processing elements and memory in the data center.

Historically, we talked about this as the memory wall, as we saw Moore's Law continue to drive processor performance to outstrip memory performance, you know, for many years. The panel on the left here stops at about 2010 because, with the help of our customer partners, we got more sophisticated in the way we think about this. As we've seen workloads become more specific in the data center, we now look at the performance of these workloads on a specific basis. And in the right panel here, you see a representation from Google from last year's MemCon, where they've mapped out specific workloads and their performance.

What you see is, even when you have memory bandwidth at 85%, which is, you know, a fairly high utilization level, CPU utilization is only running at 60%. The drive here for us is to unlock that memory subsystem to provide that efficiency for these CPUs. We do it in a number of ways, and it provides some opportunities that we're seeing in the data center currently. So some of the things we'll probably get asked about by Quinn and others: today we see a transition going on in the CPU memory subsystem from DDR4 to DDR5 to offer greater capacity and bandwidth to the CPUs, as we see core counts continue to grow in today's data center CPUs.
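A standard way to reason about the bandwidth-bound behavior Matt describes here, where a CPU sits partly idle while memory bandwidth is nearly saturated, is a roofline-style calculation. The sketch below is illustrative only; the peak-compute and peak-bandwidth figures are assumptions, not numbers from the slides.

```python
# Roofline-style sketch: attainable throughput is capped by either the
# compute ceiling or by memory bandwidth times arithmetic intensity.
# Both peak figures below are assumed for illustration only.
PEAK_COMPUTE_GFLOPS = 3000.0  # hypothetical CPU peak, GFLOP/s
PEAK_MEM_BW_GBS = 300.0       # hypothetical peak memory bandwidth, GB/s

def attainable_gflops(flops_per_byte: float) -> float:
    """Attainable GFLOP/s for a workload with the given arithmetic intensity."""
    return min(PEAK_COMPUTE_GFLOPS, PEAK_MEM_BW_GBS * flops_per_byte)

for intensity in (0.5, 2.0, 10.0, 50.0):
    utilization = attainable_gflops(intensity) / PEAK_COMPUTE_GFLOPS
    print(f"{intensity:5.1f} FLOP/byte -> CPU utilization ~{utilization:.0%}")
```

Low-intensity workloads saturate the memory link long before the cores are busy, which is the efficiency gap the memory subsystem work is aimed at.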

We see the emergence of serial-connected memory and the emergence and convergence around CXL as a serial interconnect to provide a new tier of memory between direct-attached DRAM and storage-class memory, which can enable some exciting memory architectures. And going forward within the data center, the notion of composable resources, you know, each of those dots that the Google team had mapped out there, being able to provide compute, memory, storage, and accelerators on a per-workload basis, is becoming more of a reality as we look toward purpose-built data center architectures, and we're enabling those here at Rambus with memory subsystems.

To give some context to some of the technologies we'll talk about, Des talked about our chip business, and we'll start here on the left. Where our products play is on registered DIMMs that go into servers in the data center. And we provide the chips here in blue that support the memory devices, provide that critical interface between the CPU and the memory, provide support and telemetry to each of these DIMMs as they become more critical subsystems on their own, as performance increases.

In our silicon IP business, you see here on the right an example of an AI accelerator or a GPU card, where our silicon IP blocks, from security in the form of a root of trust for quantum-safe cryptography, to the critical interfaces to HBM memory, and then connectivity via PCIe or CXL, all get nicely integrated into solutions. We're showing them here on a card, but they reside within those critical and advanced accelerators that we see from many vendors today, as we've seen AI come to the fore. As we move forward, we continue to see challenges and certainly look forward to sharing future roadmaps.

The focus for Rambus is unlocking the memory bandwidth to increase that efficiency for today's and tomorrow's computing engines. Some of the challenges we see: increasing capacity with more DIMMs per channel creates, you know, other problems in terms of pin count and other physical challenges that need to be solved. So how do we come up with creative DIMM architectures going forward? There's the continued move towards process scaling and CPU integration, from dual-socket servers potentially into single-CPU servers, and some of the challenges that brings forward with serial-attached memory. Certainly, power and thermals provide the envelope within which we work.

So we continue to invest, solving today's problems with some of the transitions we talked about in the data center, and look forward to continuing to try to knock down that Memory Wall, as a company.

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

Perfect. Well, Matt and Des, thank you for those introductory slides. Des, you talked about the change in the company's strategy, moving from sort of a patent licensing and intellectual property licensing company to a product company. But maybe spend a minute on the thought process, or the rationale, behind that change in strategy to focus more on silicon solutions.

Des Lynch
CFO, Rambus

Yeah, it's a great question, Quinn, and as we mentioned in the overview, Rambus was founded over 30 years ago by defining the fundamental technology that's used in memory interface solutions. You know, the company started as a pure patent licensing business, and over time, we've really transformed the company into a technology company, where we sell semiconductor products through our chip and IP solutions. The patent business is still an important part of our business, where we generate millions of dollars of stable cash flow each year, and that has really enabled us to invest into our products. Our investments into products really started with our silicon IP solutions, where we sell security and high-speed controller solutions, and we've done a really nice job here of growing the business to scale, to over $100 million in revenue. It's really in our chip business where we've seen phenomenal growth in the business model, and that's really the high-growth engine of the company today.

With the continued need for more memory performance and capacity to keep pace with computing advances, we continue to see growing market opportunities for us, both in the data center and client end markets. But overall, we've been very pleased with the transition of the company from that pure patent licensing model into a semiconductor product company, and we're very excited about the growth opportunities that lie ahead of us, Quinn.

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

I wanted to spend some minutes now on the chip business. I think in your comments, you mentioned that the silicon business had grown from, I think you said, around $39 million to over $220 million, so obviously phenomenal growth. Where do you see your current position in the memory interface market?

Des Lynch
CFO, Rambus

Yeah, it's a great question. Really, we got into this business given our rich heritage in memory interfaces, and one of our patent licensees encouraged us to enter this market. And to accelerate our own internal development, we acquired the assets from Inphi in 2016, and our performance since then has really been exceptional. We've been laser focused on executing a product strategy of delivering quality, reliable products to the market, and that's really been reflected in both our revenue and market share gains. As you mentioned, we grew from $38 million in 2018, at almost 0% market share, to around $227 million in 2022, at about mid-20% market share, predominantly on DDR4 solutions.
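As a back-of-envelope check on those figures (illustrative arithmetic only, using the revenue numbers above and the roughly $800-$850 million 2022 RCD market size Des cites a little later in the conversation), the growth rate and implied share work out as follows:

```python
# Rough check on the chip-business figures quoted in this discussion:
# ~$38M of revenue in 2018 growing to ~$227M in 2022, against a 2022
# RCD market of roughly $800-850M (cited later in the conversation).
rev_2018, rev_2022 = 38e6, 227e6
market_low, market_high = 800e6, 850e6

cagr = (rev_2022 / rev_2018) ** (1 / 4) - 1
print(f"Implied 2018-2022 CAGR: {cagr:.0%}")                 # ~56%
print(f"Implied 2022 share: {rev_2022 / market_high:.0%} "
      f"to {rev_2022 / market_low:.0%}")                     # ~27% to 28%
```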

What we've done is really invested early into these cycles and executed very well with the sub-generations of DDR4, and that's where we've been able to continue to grow. And also, I would say an important point in our growth has really been how we managed the supply chain challenges of the past couple of years; we executed that very well, which strengthened our relationships with our customers. But really, the exciting part of our strategy is going forward, and that's the transition to DDR5, which is taking place in the market just now. Again, we invested early into that DDR5 cycle, and we were first to market with our first-generation solutions in 2017, and we're really excited about the growth opportunities ahead of us in this area, Quinn.

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

Yeah, I've got a couple of questions on DDR5, but maybe before we get there, can you just for investors listening to the webcast, you know, how large is the memory interface market, and what are some of the key drivers? Is it, you know, increasing memory channels in next-generation processors? You mentioned DDR4 to DDR5. How much of an ASP premium might you see as you transition to newer versions of DDR in this market?

Des Lynch
CFO, Rambus

Yeah, that's a great question there, Quinn. You know, what I would say is the RCD market was around $800-$850 million in 2022, and we expect the market in 2023 to have contracted by around 10%-15%. The decline in market size in 2023 was really driven by a combination of the DDR4 inventory digestion, as well as a delayed rollout of the DDR5 solutions. But really, despite the market decline in 2023, we have gained market share; our revenue performance in 2023, using the midpoint of our guidance for Q4, will be relatively flat year over year in a market that has declined double digits. So that really speaks to the strength of our execution.

Looking ahead to 2024, we do expect the market to rebound in the double-digit range, really driven by the growth in AI server clusters, as well as general purpose compute. You asked about the drivers of that market growth, and really it's the additional memory channels, which Matt talked about upfront; DDR5 offers an increased opportunity per server. The increased speed of DDR5 requires more dedicated memory channels. So under DDR4 solutions, you had 6 dedicated memory channels, and if you look at the first generations of DDR5 products from Intel and AMD, they increased to 8 and 12 channels respectively.

So that's a nice sort of growth driver in the market size for us. Lastly, I think you asked about the ASP premium between DDR4 and DDR5. I think it is important to note that DDR4 solutions have been in the market for the last 7+ years, so you can certainly assume that the price has really eroded over time. This generational change from DDR4 to DDR5 does offer us an ASP reset, which is obviously beneficial to Rambus. I would also point out that the sub-generations of DDR5 are coming around faster; they're coming around every 12 months, if you follow Intel's roadmap, as opposed to every 24 months under DDR4. So we will see an ASP reset in each sub-generation of DDR5.
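To make the per-server opportunity concrete, here is a hypothetical sketch. The channel counts follow the discussion above (6 on the DDR4-era platforms, 8 to 12 on first-generation DDR5 platforms); the DIMM population and the ASPs are assumptions for illustration, not Rambus figures.

```python
# Hypothetical per-server RCD content, DDR4 vs. DDR5 platforms.
# Each registered DIMM carries one RCD; DIMM counts and ASPs are assumed.
def rcds_per_server(channels_per_cpu: int, dimms_per_channel: int = 1,
                    cpus: int = 2) -> int:
    return channels_per_cpu * dimms_per_channel * cpus

ddr4_rcds = rcds_per_server(channels_per_cpu=6)    # 12 RCDs per server
ddr5_rcds = rcds_per_server(channels_per_cpu=12)   # 24 RCDs per server

ddr4_asp, ddr5_asp = 3.0, 5.0  # assumed ASPs in dollars (DDR5 reflects the reset)
print(f"DDR4 server: {ddr4_rcds} RCDs, ~${ddr4_rcds * ddr4_asp:.0f} of content")
print(f"DDR5 server: {ddr5_rcds} RCDs, ~${ddr5_rcds * ddr5_asp:.0f} of content")
```

More channels per CPU plus a per-unit ASP reset compound, which is the market-expansion argument being made here.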

Overall, we're very pleased with the growing market opportunity that DDR5 offers to us.

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

It sounds like with faster generations and price resets, you know, DDR5 is a pretty important driver. And I think you've been first to market with that key RCD component, but maybe what are your expectations? You talked about mid-20% share, I think, at the end of 2022, probably moving a little higher here in 2023 with your revenue flat while the market's down. What are your expectations for market share in DDR5, and when do you think DDR5, you know, crosses over to become the dominant memory technology in the server or data center segment?

Des Lynch
CFO, Rambus

Yeah, it's a great question, Quinn. You know, we're really excited about the transition to DDR5. And as I talked about, we have invested early into that DDR5 cycle, and being first to market with working silicon really helps drive that first-mover advantage and really translates to market share gains, as you get to work with the ecosystem on signal integrity and interoperability of these chips. In DDR5, we are targeting getting towards 40%-50% market share, and we'll continue to invest and lead in these solutions. Just in December, we did announce our fourth-generation DDR5 product, which has speeds up to 7,200 mega transfers per second and will ramp into the market in 2026; that's up from 3,200 mega transfers per second under DDR4.
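For context on what those transfer rates mean in bandwidth terms, standard arithmetic (assuming the usual 64-bit, 8-byte data path per DIMM and ignoring ECC bits) gives:

```python
# Peak per-DIMM bandwidth implied by the transfer rates quoted above,
# assuming a 64-bit (8-byte) data path and ignoring ECC bits.
def peak_bandwidth_gbs(mega_transfers_per_s: int, bytes_per_transfer: int = 8) -> float:
    return mega_transfers_per_s * bytes_per_transfer / 1000  # GB/s

print(f"DDR4-3200: {peak_bandwidth_gbs(3200):.1f} GB/s")  # 25.6 GB/s
print(f"DDR5-7200: {peak_bandwidth_gbs(7200):.1f} GB/s")  # 57.6 GB/s
```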

So we continue to invest and lead here. In terms of the timing of the market, we're really in the first innings of DDR5. We saw volume production of the first generation of DDR5 products really ramping in the second half of 2023. Consistent with others in the industry, we see the industry crossover, meaning DDR5 becoming the predominant module shipment, really occurring in the first half of 2024. At Rambus, we did see the crossover earlier than the industry, given some of the inventory digestion of DDR4. But we're really excited about the transition to DDR5 and our position within the market, and the market share gains and market opportunities it offers ahead of us.

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

You've mentioned the DDR4, you know, inventory a couple of times. Just where do you think we are in that inventory clearance process? You know, how important to your forward revenue would a recovery or an end of inventory digestion be in DDR4?

Des Lynch
CFO, Rambus

Yeah, that's a great question. We've certainly been talking about the inventory digestion on our earnings calls, and what we've talked about is a lumpy transition from DDR4 to DDR5, really driven by some of these inventory challenges. The DDR4 buildup of inventory was really a function of the supply chain challenges of the past couple of years, as customers built up some inventory in 2022 to protect against those supply chain challenges. And what we have really said is that the inventory digestion really started in Q2 of 2023, and it has been a headwind for us throughout the remainder of 2023.

We have shipped minimal product to our customers in this period, and what we've talked about on our earnings calls is that in both the June and September quarters, we've seen inventory levels come down at our customers. What we've mentioned is that in the December quarter, again, we were shipping minimal DDR4 products, as customers took a conservative posture towards their inventory management towards year-end. But we do expect this to come back into the model in the first half of 2024, and we do expect DDR4 to have a long tail of demand, continuing for another 12 to 18 months, which is really consistent with the prior generations of DDR cycles.

We're still working with the customers on the timing. Customers are holding different levels of sort of inventory, so it will not be linear in the sort of Q1 timeframe. It will come back into the model over the sort of first half of 2024 from there.

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

Got it. Got it. In one of the slides, the product slide that Matt covered, it showed not only the main RCD component, but the SPD hub and the temp sensors, and I think Rambus's strategy is to also expand into those types of solutions as well. Can you talk about your efforts there, and what kind of TAM expansion are you seeing from those opportunities? Where are you in terms of market share, or have these new chips started to contribute to the revenue stream?

Matt Jones
VP of Strategic Marketing, Rambus

Yeah, Quinn, that's a great pickup. In the transition from DDR4 to DDR5, in addition to the expansion opportunities Des talked about in terms of the advantages we have with early investment and some of the share gains, the definition of the Registered DIMM changed to include those companion chips, as we and others have termed them. The SPD Hub, the Temperature Sensor, and the Power Management IC, or PMIC, all moved from the motherboard in previous generations onto each of the memory modules, given the importance of each of these RDIMMs now to the system, their increasing performance and complexity, and the need to monitor and provide telemetry via those devices back to the CPU for DIMM health and performance.

When we look at this new opportunity for us, we look at it as somewhere in the neighborhood of a $600 million opportunity for the combination of those devices, on top of the traditional buffer chip or RCD. We've taken a very deliberate approach entering the space, focusing first on RCD execution with our team to drive that early investment, that early position Des talked about.
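Stacking that against the RCD market size quoted earlier gives a rough sense of the combined DIMM-chipset opportunity. Illustrative arithmetic only, using the figures from this conversation:

```python
# Combined opportunity implied by figures quoted in this conversation:
# the ~$800-850M 2022 RCD market plus the ~$600M companion-chip
# opportunity (SPD hub, temperature sensors, PMIC).
rcd_market_low, rcd_market_high = 800e6, 850e6
companion_opportunity = 600e6

total_low = rcd_market_low + companion_opportunity
total_high = rcd_market_high + companion_opportunity
print(f"Combined opportunity: ~${total_low / 1e9:.1f}B to ~${total_high / 1e9:.2f}B")
```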

And we see our companion chips: the SPD Hub and Temperature Sensor that were previously announced, and the PMIC, for which we've more recently talked on our earnings calls about having silicon and providing it to customers. We see that picking up coincident with Gen 2 of DDR5, as we see the rollout of processors like Emerald Rapids from Intel, beginning sometime in the second half of 2024.

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

Got it. Got it. One of the questions we'll get from investors, you know, a number of companies have exposure to general compute. You know, the question is, gee, general compute may be, at least in the near term, negatively affected by a shift in spending to AI. And so how do you view that transition in spending from general compute to AI in the near term? Is that a benefit? Is that a headwind, you know, or perhaps maybe just to step back, how does the company benefit from the adoption of AI in the business?

Matt Jones
VP of Strategic Marketing, Rambus

Yeah. Certainly in 2023, as AI exploded onto the scene and into an environment where CapEx budgets had already been set, we read and saw a lot of the share-of-wallet friction that others saw in terms of AI potentially competing with general compute. For Rambus, in our product business, on the silicon IP side, we see this as a driver of things like our HBM core and some of our interface cores in CXL and PCI Express. As we showed in that example of an AI accelerator in the brief overview, our silicon IP business fits very squarely and directly into those solutions.

From a chip business perspective, the DDR content in an AI server outstrips that of a general purpose server. So it's very exciting to see that high growth and the adoption of DDR5, in addition to HBM, in AI servers and clusters. And then across the data center, the support infrastructure for that funnel of data that takes it to that last mile, if you will, to the AI training engines, will see an upgrade and a drive for performance. So we see AI as a tailwind for the adoption of DDR5. We're certainly looking at the relative growth rates and some of those share-of-wallet friction points we saw in 2023, and we'll see how that progresses in 2024.
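As a purely hypothetical illustration of the DDR content point (the server configurations and DIMM counts below are assumptions, not Rambus or customer figures):

```python
# Hypothetical RDIMM counts: a mainstream two-socket server vs. a more
# heavily populated AI training server. All configurations are assumed.
def rdimm_count(cpus: int, channels_per_cpu: int, dimms_per_channel: int) -> int:
    return cpus * channels_per_cpu * dimms_per_channel

general_purpose = rdimm_count(cpus=2, channels_per_cpu=8, dimms_per_channel=1)
ai_training = rdimm_count(cpus=2, channels_per_cpu=12, dimms_per_channel=2)

print(f"General-purpose server: {general_purpose} RDIMMs")  # 16
print(f"AI training server:     {ai_training} RDIMMs")      # 48
```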

But extremely excited about AI and the opportunities it provides our engineering team to deliver future solutions as well.

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

It sounds like AI certainly helps the DDR5 business, but you also mentioned some of your silicon IP business, and maybe this is a good transition point to move into the silicon IP business. You know, at a high level, can you just describe the silicon IP business? What are some of the core IP blocks that you sell to customers? And maybe just describe who are the typical customers for your IP business.

Des Lynch
CFO, Rambus

Yeah, it's a great question, Quinn. You know, in silicon IP, we develop IP building blocks where customers really take our IP solutions and integrate them into larger ASIC or SoC solutions. And the portfolio today is really made up of leading-edge security IP and high-speed controller IP solutions. In security IP, with the continued drive towards the heterogeneous computing environment, we are seeing the growing importance of security hardware solutions. And really, our portfolio offers solutions for data at rest and data in motion through both our IPsec and MACsec technology solutions. In controller IP, we have a range of solutions which really solve the critical problem of interfacing the processing element with the attached memory.

Our solutions here are in the high-growth areas of GDDR and HBM, which can really help with AI accelerators, as Matt talked about earlier, and we also have CXL and PCIe controller solutions as well. In terms of the customer base, we see a real breadth of customers in this business, ranging from start-up companies all the way through to well-known semiconductor companies.

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

Matt, you mentioned a couple of, I think, key buzzwords, at least from the investment community, HBM and CXL. So maybe let's start with the HBM. You just recently introduced, I think, a new version of the HBM3 controller that has 60% higher gigatransfers than anything on the market today. Can you just sort of elaborate on the HBM opportunity that you see?

Matt Jones
VP of Strategic Marketing, Rambus

Yeah, Quinn, certainly. You know, the public emergence of AI reflects something everyone's been working on for some time, but the explosion of AI is coupled with the market and the industry really looking to continue to advance, commercialize, and harness that power. And we see a real step up in the desire for faster HBM. So we were pleased to announce the upgrade to our HBM3 controller. You'll hear, finally, some coalescence in the market around HBM3E as the name for the next generation of HBM memory; I think we've all settled on that as the moniker for that next generation.

So today's developments are going on around HBM3E, the next generation of chips. We've been working with customers with that IP block for a bit of time as we led towards the announcement. And you'll see that higher-speed HBM memory, and our controller solution, drive the next generation of AI accelerators and other applications.

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

Perfect. I also wanted to touch just basically on the CXL market. What do you offer in CXL or PCI Express in terms of controllers? You know, are you in the IP segment today, but do you see that potentially becoming something where you might offer a standard silicon product at some point in the future for CXL or PCIe?

Matt Jones
VP of Strategic Marketing, Rambus

Yeah. So, in terms of the silicon IP business, as we touched upon in the brief overview with the cartoon we showed of an example system, certainly the IP blocks for PCI Express and CXL across all generations have continued to help drive our silicon IP business to scale. The diversity of customers and the interactions with that ecosystem continue to drive our innovation engine, and we've been very excited about the IP play. But we've talked over the past couple of years about the emerging opportunity for silicon solutions for Rambus, specifically around CXL-attached memory and the silicon solutions that would enable it to support things like memory expansion.

So, being able to use the CXL serial interface as a means of putting more DRAM into a server for things like containing large language models for AI, and the notion of memory pooling to provide more efficient use of memory amongst processing elements in heterogeneous compute models. So we remain very excited both about the IP that we provide across a range of solutions for CXL, and also the continued work our team is putting in on potential chip developments going forward for those CXL-attached memory solutions, Quinn.
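To make the memory-expansion idea concrete, here is a hypothetical configuration (capacities and device counts are assumptions for illustration, not a Rambus product specification):

```python
# Hypothetical DRAM capacity expansion via CXL-attached memory.
# Direct-attached capacity is bounded by channel count and DIMM size;
# CXL memory expanders add a further tier over the serial link.
channels_per_cpu = 12       # e.g., a first-generation DDR5 server CPU
dimms_per_channel = 1
dimm_capacity_gb = 64       # assumed DIMM size
cxl_expanders = 4           # assumed number of CXL memory expansion devices
expander_capacity_gb = 256  # assumed capacity per CXL device

direct_gb = channels_per_cpu * dimms_per_channel * dimm_capacity_gb
cxl_gb = cxl_expanders * expander_capacity_gb
total_gb = direct_gb + cxl_gb
print(f"Direct-attached DRAM: {direct_gb} GB")
print(f"CXL-attached DRAM:    {cxl_gb} GB")
print(f"Total per socket:     {total_gb} GB ({cxl_gb / total_gb:.0%} via CXL)")
```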

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

Perfect. And then, Des, you mentioned the traditional patent licensing business, I think, you know, kind of running at a $200-$210 million level, sort of being a very stable foundation for the business. Are there any other details you wanna share on the patent licensing business? Are there other big renewals coming up? I think you mentioned renewals at Samsung and SK Hynix. You know, there's a third big DRAM provider out there that maybe at some point comes due, but any big renewals you might be looking at in the next year or two?

Des Lynch
CFO, Rambus

Yeah, that's a great question, Quinn. You know, as we talked about, the patent licensing business has been very stable at the $210 million midpoint. And over the past year, we've been delighted with both the Samsung and SK Hynix renewals to 10-year terms, which again, I think, speaks to the strength of the patent portfolio. I think it is important to note that the top three licensees in patents are also our top three customers in the chip business. And really, the licensees see that we are reinvesting the patent dollars back into product programs that they will directly benefit from.

So really, by signing these 10-year extensions, which are longer than the typical patent extensions of 5-7 years, it really signals that customers want to continue to collaborate with us on products going forward. You are right, the next major renewal we have is with Micron, which will be in Q4 of this year, 2024, and we really expect that it will come down to the wire in the negotiation, which is more typical of patent licensing discussions. But overall, we've been very pleased with the stable foundation that the patent business has really offered to us.

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

Perfect. We've got a couple of minutes left. I've got two questions from investors. I think similar to my question on CXL, this investor is asking, "Does Rambus see an opportunity, you know, in product revenue for HBM? Can you give us some examples of how?"

Matt Jones
VP of Strategic Marketing, Rambus

Yeah, Quinn, I think that as we talked about from the innovations through the IP, through the chips, the critical problem that we solve is the interface and unlocking the potential between CPU or processing element and memory. As of now, HBM doesn't have a chip opportunity for that interface. The way we solve that problem is via the IP block that we sell. Longer range, we certainly have a very smart group of people, you know, working with customers on, you know, what the AI memory architecture of the future might look like, HBM or otherwise. But as of now, you know, that critical problem gets solved via a silicon IP block that gets integrated into things like accelerators, GPUs, TPUs, et cetera.

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

Got it. And the second question, perhaps a little bit more technical: do you see memory technology becoming more critical with AI training, specifically SIMD, single instruction, multiple data, architectures?

Matt Jones
VP of Strategic Marketing, Rambus

Absolutely. You know, one of the things we talked about longer range, among the problems that we see, is the ability to move memory closer to, and more memory closer to, these processing elements, and to provide faster memory and near-memory compute, for example, closer to these accelerators. That increases the efficiency to solve the problem we talked about in the brief overview, but it also solves some of the energy problems that we see as well. When moving data around systems, both energy and security become paramount. So we certainly don't think we've got this beat. You know, the memory wall that we talked about is still there.

We'll continue to endeavor to solve this, but certainly a lot of interesting things for us to continue to work on, Quinn.

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

Perfect. Well, I think that brings us to the end of our session. So Des, Matt, thank you very much for joining us at the Needham Growth Conference. We really appreciate it, and hope you guys have a great rest of the day.

Des Lynch
CFO, Rambus

Thank you, Quinn.

Matt Jones
VP of Strategic Marketing, Rambus

Thanks, Quinn.

Quinn Bolton
Managing Director and Equity Research Analyst, Needham & Company

Thanks, everybody.
