Rambus Inc. (RMBS)

Wells Fargo 8th Annual TMT Summit Conference

Dec 3, 2024

Speaker 1

Year or a couple of years. How would you characterize those? What would be the three or four top key drivers for you guys? And then certainly I'll get into a lot more detail.

Luc Seraphin
President and CEO, Rambus

Sure. Well, the first driver is the data center growth itself, the growth of servers for data centers. And there are two types of servers: AI servers, which actually use standard processors and standard memories as well, and the traditional servers. The combination of these two types of servers is a driver of growth for us. Whether you talk about AI servers or traditional servers, there's a need for more bandwidth and more capacity. And the way to increase bandwidth and capacity in a server is by adding memory channels for each processor that you have, and/or by adding to the number of DIMMs, or modules, that you have per channel.
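To make that arithmetic concrete, here is a minimal sketch, using hypothetical round numbers rather than Rambus figures, of how one socket's memory capacity and bandwidth scale with channel count and DIMMs per channel:

```python
# Minimal sketch (hypothetical round numbers, not Rambus data): capacity
# scales with channels * DIMMs per channel; bandwidth with channels alone.

def server_memory(channels: int, dimms_per_channel: int,
                  dimm_capacity_gb: int = 64,
                  channel_gbs: float = 38.4) -> tuple:
    """Return (capacity_gb, peak_bandwidth_gbs) for one CPU socket.

    channel_gbs is peak per-channel bandwidth; e.g. DDR5-4800 moves
    4800 MT/s * 8 bytes = 38.4 GB/s per channel (illustrative).
    """
    capacity = channels * dimms_per_channel * dimm_capacity_gb
    bandwidth = channels * channel_gbs
    return capacity, bandwidth

# Going from 8 channels x 1 DIMM to 12 channels x 2 DIMMs:
print(server_memory(8, 1))   # ~(512 GB, 307.2 GB/s)
print(server_memory(12, 2))  # ~(1536 GB, 460.8 GB/s)
```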

So what we see in the market is that there's growth in the number of channels that are available with the rollout of new processors from Intel, AMD, and others. But we also see customers populating those channels with more DIMMs, because they need more bandwidth and more capacity. All of this combined increases the unit growth for DIMMs, and therefore translates into growth for us. The second vector is really content growth. As the market transitioned from DDR4 to DDR5, some functions that were traditionally on the motherboard in the DDR4 generation transferred to the module itself. These functions are the power management chip, the controller chip, which we call the SPD hub, and temperature sensors. So that adds content onto the DIMM when we move from DDR4 to DDR5. And that's the second driver of growth for us.

So, growth in demand for more capacity, more bandwidth, and with that, an expansion of the content on each one of those memory modules.

So, thinking about that, we've got three pieces of the business that you guys segment, right? You've got patent licensing, you've got silicon IP, and then I think the most exciting area of incremental growth is what you're hinting at, which is the product area, the chip business. I think today, and you're gonna correct me if I'm wrong, 90%-95% of your product revenue is on the RCD, the register clock driver. How do we think about that progression, that content expansion into things like the SPD hubs and the temperature sensors, the PMIC? Ultimately, I'm gonna ask you: how do you define success from that perspective?

Yeah. So you're correct that today, you know, most of our revenue is coming from the RCD chip. That's the most important chip on a memory module.

Yep.

That's the most complex chip also to design and to make. If we roll back to our past, that's where we started the business. We started by delivering these RCD chips. And little by little, we gained share, from 0% share to a little more than 30% share last year. And we continue to grow our share. We started with that strategically because that's the most important chip on the module, the most difficult to qualify, the most difficult to make. But as the market transitioned to DDR5, we started the development of these other chips: the SPD hub, the temperature sensor, and the PMIC. And we're starting to ship those in qualification volumes this year. They're going to start contributing this quarter, and continue to grow quarter after quarter next year.

In terms of market size, you could say that the RCD chip today is around $750 million in size. The combined market size for the companion chips, so SPD hub, temperature sensor, and PMIC combined, adds about $600 million to that. So that's approximately a doubling of the market size.

Yep.

By adding those chips. But we are at different stages of qualification with these new chips. They're gonna ramp into the market in 2025 and beyond.
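A quick back-of-envelope check of those market sizes, using only the figures quoted above:

```python
# Back-of-envelope check of the quoted market sizes (illustrative only).
rcd_tam = 750e6        # RCD chip market, ~$750M per the discussion
companion_tam = 600e6  # SPD hub + temperature sensor + PMIC, ~$600M combined

total = rcd_tam + companion_tam
print(f"Combined TAM: ${total / 1e9:.2f}B")               # $1.35B
print(f"Expansion vs. RCD alone: {total / rcd_tam:.1f}x")  # 1.8x, i.e. "approximately a doubling"
```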

Yep. And you talked about TAM; that's very helpful. I mean, on the content side of it, the dollar content per DIMM module that you're seeing, I think we've talked about the math. Just walk us through it: the $4 of content on the RCD roughly doubles as we get into...

It's about doubling the content. We don't provide exact pricing because we have.

Yep.

You know, two competitors only and three customers. But yeah, that's about, you know, that's about right.

Mm-hmm.

And then last month we announced the availability of MRDIMM solutions for 2026 and beyond. And there's gonna be another increase in content.

Yep.

Because we're gonna add some additional chips on these MRDIMM modules.

Yep. And so again, I'm gonna go back to the prior question: how do you define success? I mean, you've talked about success as being 30%-plus market share. I think that was more in reference to DDR4. You've been fairly confident in what you see as an expanded market share opportunity as we move through that DDR5 transition.

Yes. Well, a factor that is important in gaining share is the ability to deliver your products to the market fast, because the qualification of these products takes time in the ecosystem. So the way we gained share in the DDR4 generations of products was by introducing our products faster than our competitors. And we continue to do that in the DDR5 generation. But strategically, the reason we introduced the RCD in DDR5 before the companion chips is because, as I said earlier, it's the most complex chip. And we didn't wanna miss that transition. It was very important for us to make sure that we were the first with DDR5 chips of any generation.

What this translates into is that if you look at last year, for example, on the DDR5 side, we were close to 40% share, which is higher than what we ever had on DDR4.

Mm-hmm.

As DDR5 continues to grow as a share of the total market, we expect our share to continue to grow from 40% up.

Yep.

But it was very important for us to have introduced the RCD chips first, and then these companion chips are gonna add revenue to that.

Yep.

And just kinda thinking about the P&L, can we put all of this in the context of the gross margin? I'm jumping around here a little bit, but the gross margin, for the model, stays consistent as we see the expansion of the companion opportunity?

Des Lynch
CFO, Rambus

That's the way to think about it. We've talked about the chip product gross margins being in that 60%-65% range. If you look at the P&L across the past couple of years, we've been in that 61%-63% range. And with the addition of the companion chips and the other chip opportunities that Luc talked about coming into the model, we do believe we'll be able to hold our gross margins within that 60%-65% range.

We'll continue to be disciplined in our pricing approach and continue to drive manufacturing cost savings to maintain the margins from there.

So rolling this all up, and taking a little bit of a step back: when we got the opportunity to cover you guys, we really looked at the TAM and built it up, 'cause we can see server units, and we can make some assumptions around DIMMs per server. But right now, maybe help us appreciate where we're at as far as channels per CPU socket. I think we're at the beginning stages of really a full breadth of 12. Does 12 go to 16? Does 16 go to something higher? How do you think about that vector of longer-term growth?

Luc Seraphin
President and CEO, Rambus

Yeah. So you're correct. What the market needs is more bandwidth and more capacity. And as we said earlier, that translates into the number of channels and the number of DIMMs per channel. At every generation of processor introduced by either AMD or Intel, they have been increasing the number of channels. If you look at the next generation, Granite Rapids and Turin, there's gonna be 12 channels. Sixteen channels, people are talking about it, but it's not on the roadmaps yet. Maybe it's gonna be in the DDR6 type of generation. We have to understand that the more channels you add, the more pins you have on the processor. It's more of a question of processor architecture.

Yep.

Des Lynch
CFO, Rambus

There's a limitation to how many pins you can have on a processor. You know, every time you add a channel, you add a large number of pins.

Yep.

Luc Seraphin
President and CEO, Rambus

So, 12 seems to be the number now. AMD was the first at 12, and Intel is gonna get there. So that's what we see at this point in time: 12 channels.

You mentioned DDR6, but there are a lot of successive iterations of DDR5 still in front of us. So the timetable of DDR5 to DDR6, how do you think about that?

Yeah. You're correct. We have several generations of DDR5 ahead of us. Gen 1 is in production. Gen 2 is starting in earnest now. We are pre-qualifying and qualifying Gen 3, and there's Gen 4. We announced Gen 5 and MRDIMMs last month. So I think we have several years ahead of us, with several generations of DDR5 where the speeds keep going up. But the industry is always thinking about the next generation. You know, DDR4 lasted for more than seven years.

Right.

So that gives you an idea. DDR6 is gonna come up beyond 2028, 2029. It's not defined yet. But the good news is that the industry is working together to find what the definition of DDR6 is going to be. And that's a good driver for this industry, because once the industry agrees, everyone can develop products that meet those...

Yep.

Those requirements. But we have several years of DDR5 ahead of us.

So the simple way I think about it is the more that these architectures expand, core count, you know, density, the more memory plays a critical role in being able to feed those cores, right? The complexity is what your business is built around. More complexity in memory architecture, the better for Rambus.

Yeah. That's a good way to summarize this. The more cores you have on a processor, the more memory you have to add. And it's not proportional, if you wish; there's a square factor between the two. If you add cores on the processor, you have to have much more memory. So more cores means more memory, which is good. But it also means different memory architectures, novel memory architectures, to allow these cores to access the data. And that's where we play. And that's why we believe this is becoming critical to all architectures in data centers, whether they are AI or not AI.

You know, the ability to think about how these cores can best access memory is really what we do as a company. And that's why we see nice growth in our business going forward.
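A toy sketch of that non-proportional relationship; the square-law exponent is the speaker's rough characterization, and the coefficient is invented purely for illustration:

```python
# Toy model of the cores-to-memory relationship described above. The
# quadratic exponent is the speaker's rough characterization; the
# coefficient k is made up purely for illustration.

def memory_needed_gb(cores: int, k: float = 0.25) -> float:
    return k * cores ** 2

for cores in (32, 64, 128):
    print(f"{cores} cores -> {memory_needed_gb(cores):.0f} GB")
# Doubling the cores quadruples the memory under this model:
# 256, 1024, 4096 GB.
```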

So what I was tiptoeing down the path towards is this MRDIMM discussion. It came up last quarter. We don't yet see standard products in the market. But to me, it sounds like, as you mentioned earlier, with 16 channels maybe, real estate footprint is an issue around these SoCs, around these sockets. So now we're looking at MRDIMMs. Can you explain what an MRDIMM is? What does it mean? And reiterate the comment you made on last quarter's conference call around the content expansion opportunity you see.

Yeah. One of the fundamental issues we talked about is that memory technology does not evolve as fast as processor technology, so we need to think about new memory architectures. One aspect of that is that speeds on the processor side evolve much faster than speeds on the DRAM side. So the whole idea behind an MRDIMM is to multiplex two memory ranks onto one processor memory bus. To make it simpler: you can run the memory bus on the processor side at full speed while the memory actually runs at half of that speed, by multiplexing it. And by doing so, you can add capacity and bandwidth onto these modules.
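A conceptual sketch of that multiplexing idea (not actual silicon behavior); the 12,800 MT/s rate is the MRDIMM figure quoted later in this conversation:

```python
# Conceptual sketch of MRDIMM rank multiplexing (not actual silicon
# behavior): two DRAM ranks each run at half the host rate, and their
# transfers are interleaved onto the full-rate host bus.

HOST_RATE_MTPS = 12800                 # host-side bus, MT/s
DRAM_RATE_MTPS = HOST_RATE_MTPS // 2   # each rank runs at 6400 MT/s

def mux_ranks(rank_a: list, rank_b: list) -> list:
    """Interleave transfers from two half-rate ranks onto one bus."""
    bus = []
    for a, b in zip(rank_a, rank_b):
        bus.extend([a, b])             # alternate beats: A, B, A, B, ...
    return bus

rank_a = ["A0", "A1", "A2"]            # arriving at 6400 MT/s
rank_b = ["B0", "B1", "B2"]
print(mux_ranks(rank_a, rank_b))       # full 12800 MT/s stream:
                                       # ['A0', 'B0', 'A1', 'B1', 'A2', 'B2']
```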

But these modules are gonna have slightly different architectures. They'll continue to have an RCD, which we call the MRCD. They're going to continue to have temperature sensors and SPD hubs, and a different type of PMIC, because you have to drive more power with a different power architecture. But we will also have what we call MDB chips. These are new chips.

Data buffers.

Des Lynch
CFO, Rambus

Ten data buffers on each of the modules. So that's gonna continue to increase the content on these modules. We're gonna be able to go to speeds up to 12,800 megatransfers per second, which is much faster than the current generation of DDR5 at 5,600. And we're going to add content quite nicely. Yeah.

I'm gonna double-click on that. I think last quarter you said 4x content. If I'm thinking that your true opportunity set with companion chips goes from $4 to $8, I'm talking 8 times 4. I can quickly do the math: we're talking roughly $30 of content on these MRDIMMs.

Luc Seraphin
President and CEO, Rambus

Yeah. As I said, we never give, you know, exact pricing.

Yeah.

In the current environment. But that's a good way to look at it. You know, 4x, and that type of ASP, is the right way to look at it.
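Putting the round numbers from that exchange together; the $4 base and both multipliers are the analyst's figures, not disclosed Rambus pricing:

```python
# Content-per-module arithmetic implied by the exchange above. The $4 RCD
# figure and the 2x/4x multipliers are the analyst's round numbers, not
# disclosed Rambus ASPs (the company does not give exact pricing).

rcd_ddr5 = 4.0                 # ~$4 of RCD content per DDR5 RDIMM
full_companion = rcd_ddr5 * 2  # companion chips roughly double it -> ~$8
mrdimm = full_companion * 4    # MRDIMM at ~4x that content -> ~$32

print(f"DDR5 RDIMM with companions: ~${full_companion:.0f}")
print(f"MRDIMM: ~${mrdimm:.0f}  (the '$30 of content' ballpark above)")
```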

And so when we look at our spreadsheets and think about this, that is an opportunity set that presents itself when?

2026. I think the initial volumes are gonna be in 2026. This is a market where the cadence of product rollout is really linked to the rollout of processors. So this will intercept the next generation of Intel processors, which is Diamond Rapids, and the equivalent on AMD.

Diamond Rapids, and Zen 6 EPYC on AMD. Yeah, that's kind of the time frame. And not to discount possibly ARM architectures too.

Yeah. And that's a very important question. Whether the processors are based on x86-type architecture cores or whether they're based on ARM or others, we're kind of agnostic to this.

Mm-hmm.

Our chip really has to do with the memory interface, not with the core itself. And the memory interface is still DDR, whether it's an ARM processor or an x86 processor. So we're kind of agnostic to the share movements between Intel and AMD, and we're agnostic to the share movements between ARM-based architectures and other types of architectures.

Mm-hmm. Okay. I know we've talked about it, and maybe it's in the context of a little bit of the MRDIMM discussion, but there's been a long discussion over the last few years about CXL and the role Compute Express Link might play in providing more memory bandwidth and more memory capacity per CPU. Where does that stand?

So we still have some very good CXL activity in our silicon IP business. You mentioned earlier that we have a patent licensing business...

Yep.

A silicon IP business and a product business. Our current CXL activity is really in the silicon IP business. We develop CXL cores, or CXL controllers, that we sell to semiconductor companies who integrate those cores into their chips, their ASICs or their GPUs, TPUs, etc. CXL is based on a serial interface, so you actually transfer data on a serial link, as opposed to a memory link, which is a parallel link. And the initial idea of CXL was that once you have completely populated all the channels, we were talking about eight or 12 channels earlier, and two DIMMs per channel...

Once you have populated all the channels and you cannot add more memory around your processor, one idea was to add additional memory through a serial bus with a serial protocol, in that case CXL. There are some complexities associated with this, especially having to do with latency, the time it takes for the data to transfer.

Yep.

Des Lynch
CFO, Rambus

To be synchronized with the rest. But the initial idea was memory expansion. We believe CXL is the definition of an interface; it's not the definition of a chip. It's not like the buffer chip, where everyone agrees on the definition of the chip. CXL defines an interface. So we see the CXL market today as being very fragmented. A lot of chips present a CXL interface, but all of these chips are different; everyone has their own version of this. So the CXL chip market is a real market, but it's a very fragmented market. We believe that if and when the industry converges on the definition of a chip, then the economics of having a pure CXL chip are gonna make sense.

For the memory expansion type of use case, I think there's gonna be the question of whether MRDIMM is a better solution than a CXL-attached memory module.

Yep.

Because MRDIMM, as we said earlier, allows you to add capacity and bandwidth on the current bus, with the current software and hardware architecture, and without any of these latency challenges. So the way we look at it is that we continue to have very strong engagements on the IP side on CXL, and we continue to work on our chip, which we don't commercialize at this point in time, but which we use with our customers to precisely understand those usage models and see where this is going to evolve.

Right. So the other product I know we didn't touch on, but maybe it's a longer-term opportunity, is the high-end PC market opportunity around clock drivers. Can you talk a little bit about that, and when should we think about modeling that out and putting it into the product revenue stream?

Sure. So the idea here is very similar. Some of the challenges that data center servers faced in the past are starting to appear on the client side. The more speed and the more capacity you want to have on a client system, the more that translates into the need for chips that you didn't need in the past. You had them on the data center side, but you don't have them on the client side in the current generations. But when speeds on the client side exceed 6.4 gigatransfers per second, then you start to have the need for the equivalent of an RCD chip, which we call the client clock driver.

Down the road, we're gonna have needs for similar functions, such as power management chips. So all of these technologies that we have developed over the years for the data center are going to waterfall into the client space for high-end clients.

Yep.

So it's gonna be the high-end client first. To give an idea of an intersection point, it's gonna be intercepting the high end of the Arrow Lake platform from Intel. It's only gonna touch maybe 10% of the market, because it's really the high-end segment of that. But that's very important for us, because these fundamental technologies, clock recovery, which we call signal integrity, and power management, are going to waterfall from the data center space to the client space. The market's gonna be modest, a $200 million type of market. We're gonna start to see shipments towards the second half of next year. But this is not gonna be the biggest contributor...

Mm-hmm.

To our revenue. But it's a critical move into the client space for us.
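The rule of thumb described above, encoded as a trivial check; the 6.4 GT/s threshold comes from this discussion, and the sample data rates are illustrative:

```python
# Client-side rule of thumb from the discussion: a DIMM needs a client
# clock driver (CKD) once data rates exceed roughly 6.4 GT/s. The
# threshold is from the conversation; the sample rates are made up.

CKD_THRESHOLD_GTPS = 6.4

def needs_clock_driver(data_rate_gtps: float) -> bool:
    return data_rate_gtps > CKD_THRESHOLD_GTPS

for rate in (5.6, 6.4, 7.2, 8.0):
    print(f"{rate} GT/s -> CKD needed: {needs_clock_driver(rate)}")
```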

Perfect. So we've gone through a lot of the product stuff; I'm gonna bring it all home a little bit. Desmond, can you talk a little bit about how we should think about the growth of the three businesses? You've talked a little bit about it on calls, but all of this together seems like a really good growth setup on the product side. Walk us through the growth profile a little bit.

Yeah. That's a great question. As you mentioned earlier, there are three ways that we go to market. First is our patent licensing business. This has really been the foundation of the company, and the way I would describe patent licensing is as a $200 million-$210 million opportunity. It has been stable and produced consistent results for us. That's the way to think of that business going forward. The second way we go to market is our silicon IP business. Silicon IP is where we develop building blocks of IP and sell them to customers who integrate them into their larger ASIC and SoC solutions. In terms of the business performance this year, it's been about $120 million. That's up about 10% compared to the prior year when you adjust for the PHY divestiture.

And the right way to think about this business on a go-forward basis would be growth of about 10%-15%, given our exposure to the high-end growth areas of data center and AI. In terms of the chip opportunity, I think you've touched upon the growth drivers. The foundation of our business really has been our RCD chip business, and we've performed very well there, with the market share gains that Luc talked about. Getting the DDR5 RCD chips to maybe 40%-50% of the market is the right way to think of that. Then you start layering in some of the new chip opportunities that Luc talked about. In terms of the companion chips, Luc talked about the market being about $600 million in terms of market size.

I think once we start to go out a couple of years, a realistic expectation in terms of market share would be around that 20% level. But we're gonna grow into that as we go throughout 2025 and into 2026 and beyond. And then, once you get further out, Aaron, you start talking about some smaller contributions from the client revenue and MRDIMM coming in. So we're very excited about the chip opportunity ahead of us.
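A hedged revenue build from the figures quoted in this answer, taking midpoints of the stated ranges; this is an analyst-style sketch, not company guidance:

```python
# Analyst-style revenue sketch from the figures quoted in this answer,
# using midpoints of the stated ranges. Not company guidance.

patent_licensing = 205e6            # "stable $200M-$210M opportunity"
silicon_ip = 120e6 * 1.125          # ~$120M growing 10%-15%
rcd_tam, rcd_share = 750e6, 0.45    # DDR5 RCD at "40%-50% of the market"
comp_tam, comp_share = 600e6, 0.20  # companions at "around 20%" once ramped

products = rcd_tam * rcd_share + comp_tam * comp_share
total = patent_licensing + silicon_ip + products
print(f"Products: ${products / 1e6:.0f}M")  # ~$458M
print(f"Total:    ${total / 1e6:.0f}M")     # ~$798M, excl. MRDIMM and client
```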

Right. And just so I'm clear: the $750 million, and then the $600 million. So you're saying think about 40%-50% share of the core RCD, and 20% market share on companion. Those TAMs would arguably not yet include the MRDIMM opportunity. Is that fair?

That's correct. That's the right way to think about it.

Yep.

Yeah.

Okay. And as you think about the model progression, we talked about gross margin, and it sounds very consistent with what we've talked about in the past. How do I think about operating margin or operating expense management? 'Cause to me, you look at these opportunities, you keep a pretty consistent gross margin, and Rambus actually has a decent amount of operating leverage.

Yeah. That's a good question. In terms of the OpEx spend, if you look at our R&D spend today, we're probably operating around 25% of revenue, about $140-$145 million of R&D expense. What I've said is that we need to continue to invest in the business to fund these high-growth opportunities ahead of us, and a realistic expectation would be in that 23%-25% of revenue range. So you will see some leverage there, but we need to continue to invest at the right levels to make sure we can continue to grow the top line. If you look at the SG&A side, we've been fairly consistent in the $18-$20 million per quarter range. We've made our OpEx investments there on the infrastructure side.

So what you'll see is maybe inflationary types of increases there, and you'll get to see some nice leverage. But really, it's a really nice model that we have. You're looking at op margins in the 40%-45% range, with very nice flow-through down to EPS as well as cash generation. Cash generation last year was over $200 million in cash from operations. So we have a very nice model. We've been very disciplined in our approach, and you'll get to see nice leverage going forward as we continue to grow the top line.
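A rough P&L sketch combining the ranges above; the patent licensing and silicon IP gross margins are my assumptions (they are not stated here), and the segment mix carries over from the revenue sketch earlier:

```python
# Rough P&L sketch from the ranges in this discussion (midpoints). The
# licensing and silicon IP gross margins are assumptions; they are not
# stated in this conversation. Segment revenues carry over from the
# revenue sketch above.

patent   = 205e6   # patent licensing, assumed ~100% gross margin
ip       = 135e6   # silicon IP, assumed ~90% gross margin
products = 458e6   # chips at the 60%-65% product gross margin (midpoint)

revenue = patent + ip + products
gross_profit = patent * 1.00 + ip * 0.90 + products * 0.625

rd  = 0.24 * revenue   # R&D at 23%-25% of revenue (midpoint)
sga = 19e6 * 4         # SG&A at $18M-$20M per quarter

op_margin = (gross_profit - rd - sga) / revenue
print(f"Gross margin: {gross_profit / revenue:.0%}")  # ~77%
print(f"Op margin:    {op_margin:.0%}")               # ~43%, inside the 40%-45% quoted
```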

That's a perfect overview. And at the high level, on the demand side, like I said, up until this last maybe quarter or two, you had a lot of overhang from DDR4 inventory. I mean, we all saw the traditional server market was down 20%-plus in volume. That's kind of worked itself through. We've cleared the inventory. Just about everything you're shipping today is DDR5, and there are no inventory concerns. Is that a fair assessment?

Yeah. I think that's a good way to put it. DDR4 certainly was a headwind for us. I think I've said on the past couple of calls that DDR4 is really at minimal levels now, and I would define that as less than about $5 million. My expectation is that that will continue to stay in the model maybe for the next four to six quarters, but we're not gonna see any large swings on the DDR4 side. In terms of DDR5, we continue to see the inventory being lean in the channels. That's really a function of the sub-generations that Luc talked about earlier coming around every 12 months or so. Customers are expecting us to hold more inventory on our balance sheet to support that.

We certainly have the cash reserves and balance sheet to support that going forward. But yeah, the DDR4 headwinds are behind us.

Perfect. Why don't I pause and see if there's any questions from the audience? If not, I'll keep going on. Go ahead.

Yep.

So maybe I'll repeat it just for the audience: are AI servers slowing down the adoption of DDR5? Maybe we can unpack that a little bit. Fair?

Luc Seraphin
President and CEO, Rambus

At a high level, no. Actually, if you look at an AI server, you have the GPUs and HBM stacks to do the number crunching. But in every AI server, you actually have standard servers that use standard DIMMs. And actually, I would say, maybe not from a volume standpoint, but from a tailwind standpoint, it was a catalyst for the adoption of DDR5, because the type of standard servers that you need to have in an AI box have to be high-end, with high capacity and high bandwidth. And this has been a catalyst for the adoption of DDR5. So that's been really good for the market and for us.

Any other questions? If not, I'm gonna double-click on that question, 'cause I think it's a really good point. To me, there's been a perception around the NVIDIA momentum: HBM, you know, six, eight stacks of HBM sitting around the GPU complex. I'd also add in LPDDR5, right? NVIDIA Grace sitting alongside, soldered down onto the motherboard in that architecture. There seems to be a misplaced perception that that's the AI server memory footprint, and that's it. And you guys also participate in HBM from a licensing and IP perspective. Maybe you can touch on that.

Des Lynch
CFO, Rambus

There are several questions here.

Yeah.

On the HBM, I'll start with this. On the IP side, we've been at the forefront of developing HBM solutions. We announced HBM4 in September of this year, and if you look at our history, we were always the first to announce several generations of HBM IP at the highest speeds, ahead of the market. So HBM is certainly a driver for our silicon IP business. Now, if you come to AI servers, we had a couple of things happening. The first thing happening last year was really that AI servers were attracting CapEx. Everyone was trying to catch up with the development of their machine learning solutions.

So a lot of money went, from a CapEx standpoint, to AI servers at the expense of traditional servers. When you have a fixed amount of money to spend and you have to catch up, then that's where the money goes. So that created the sluggish demand for traditional servers. But as I said earlier, even though you had this CapEx effect, it was a positive catalyst for the adoption of DDR5, because these AI servers do use high-end DDR5 solutions. Now, if you go back to the impact on standard servers, it delayed the usual refresh cycle of standard servers.

Yep.

But we do see that refresh cycle of standard servers happening in the second half of this year. If you look at our revenue in Q3 and the midpoint of our guidance for Q4 for our buffer chip business, the second half of the year is 30% higher than the first half of the year, and it's 30% higher than the same half last year. So we do see a recovery...

Mm-hmm.

There in the standard server. We did see the positive impact of AI servers on the adoption of DDR5 as a technology. The first half of the year was a soft half just because of that dynamic of CapEx.

Yeah, and I think you're hearing a little bit of that from.

Right.

From the traditional system vendors. We'll have a report...

Yeah.

On Thursday night.

Yeah.

That they're seeing some of that recovery. So you guys naturally get pulled along with that recovery in traditional servers.

Right. Yeah.

Yeah.

LPDDR, you mentioned LPDDR. LPDDR is, I would say, a very specialized solution that is used for supercomputer types of applications. I think the company that is promoting this was talking about a handful of very large supercomputer applications where the use of LPDDR in a very specific way is better tailored to those types of applications. But that does not eat at all into the share of standard AI servers or traditional servers.

So, in theory, I mean, the difference with that solution, again, is that it's soldered down onto the motherboard.

Yes. It's soldered. It's LPDDR. It's sold as a complete system. But the software architecture on those systems is also very different. So it's a very different type of use case. LPDDR offers advantages for that very specific use case.

Yep.

But it's not something that can, I would say, go into standard servers, precisely because it's soldered. There are a lot of advantages to using standard DIMMs: the BIOS and software architecture that you have in all servers can be reused from generation to generation. And also, on a standard DIMM, there's an aspect called ECC; that reliability, that redundancy is really important in standard servers. You don't have this in LPDDR.

So I got 20 seconds left, and I'm just gonna ask this question. Is there anything, you know, that I didn't ask you that I should have asked you, or is there a key topic, one or two topics that you think, you know, investors just don't really understand? Just to throw it out there.

No, I think we've covered most of the things. We believe memory architectures are becoming more and more important, whether it's on the data center side or, in the future, on the client side. I think these new technologies, in particular the PMIC, are going to be more and more important and more critical to that. I think you've covered most of the things...

Perfect.

That we need to cover. Yeah.

Perfect.

Thank you.

Luc, Des, thank you so much.

Yeah.

Appreciate it.

No problem. Thank you.

Thank you.

Thank you.
