Astera Labs, Inc. (ALAB)

Barclays 23rd Annual Global Technology Conference

Dec 11, 2025

Thomas O'Malley
Director and Equity Research Analyst, Barclays

VPAT or OmniConnect? Guess we're starting a little early. This is perfect. All right. Welcome back to the Barclays Global Tech Conference. I'm Tom O'Malley, semi and semi-cap equipment analyst here. Very pleased to have Jitendra Mohan and Mike Tate from Astera Labs. Thank you for being here, guys.

Jitendra Mohan
CEO, Astera Labs

Yeah, absolutely. Thank you.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

A thematic that we've started with in a lot of these conversations is just the idea that we're in the early innings of a large AI investment cycle. Maybe talk about that. I understand that you are enabling this large trend and maybe not making decisions from a top-down level, but I'd love to kinda get your perspective on where we are in this investment cycle, and then maybe just begin with where Astera is helping these deployments actually come to life.

Jitendra Mohan
CEO, Astera Labs

Yeah, absolutely. So maybe just a quick show of hands. How many people drove here this morning? I drove myself. I have a Tesla. From my garage to the parking lot, it drove all the way with zero intervention. And yet my wife will not let me have FSD because she's afraid that the one mistake it might make will end up with me dead. I really do think that we are in the early innings of AI, as these systems do need to get, you know, measurably better. In order to do that, we are gonna need a lot more compute to make these models, you know, near perfect, for them to be super useful in our daily lives.

As a consumer, I can say that when I enter a search query into Google that I've been using for decades now, I'm no longer asking it a search question. I'm asking it an AI question and expect to get an AI answer for it, and so this is just going to become second nature to all the consumers. Today we don't pay much for these systems, but I think we are gonna get hooked, addicted, and we will pay money for all of these systems, so I think there is a lot more runway here. We are truly in the early innings. In order to make this truly successful, the amount of compute will need to go up.

As the amount of compute goes up, the level of connectivity that is required to have this compute talk to each other is also going to go up. That is definitely a big boon for us. Since we started Astera, we've been helping GPUs and XPUs and CPUs all connect to each other. I would even go out and say that the connectivity has become, you know, a bigger problem that needs to be solved to drive efficiency in these systems. We are really looking forward to what is to come. The orders that we have from our customers don't seem to show any sign of slowdown. If there is any talk of AI slowdown, etc., we are not seeing any of it in our business.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Super helpful. So I know you addressed a lot of different verticals in AI. We have cables, retimers, the areas that we spend more time on. But I kinda wanna start in reverse order today because something you just mentioned where we have all these deployments, a lot of the questions that come up today are, how are we able to get these in the market? And one of the areas that comes up again and again is memory, right? And I think on the last call, really the first time since the IPO in a more significant way, you talked about CXL and a partnership that you have with Microsoft. So.

Jitendra Mohan
CEO, Astera Labs

Yeah.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

I think it's really useful. Maybe spend a little time talking about what you're doing in memory in particular and what CXL does, and then maybe a little bit about the partnership that you talked about.

Jitendra Mohan
CEO, Astera Labs

Yeah, absolutely. So, you know, ever since our IPO, we've been talking about how Astera is all about solving the connectivity challenge and improving the efficiency of connectivity infrastructure, especially as it relates to AI. We have done that very successfully with, you know, call it interconnect products, which are the retimers and the switches and so on that we talk about a lot. But that addresses the data bottleneck and the networking bottleneck with PCI Express, Ethernet, and soon UAL, etc. But there is also the memory bottleneck that you point out. As these models become larger and larger, it is becoming very difficult to fit them into the HBM memory that these GPUs have.

So today the solution is, you know, you just deploy more GPUs so that you collectively have the right amount of HBM, and then you talk amongst these GPUs extremely fast. And of course, we have benefited from that trend. But these models continue to go into trillions of parameters. I think when we went public, we were talking about 1 trillion. And now I think the largest model is 3 point something trillion. It's just insane. So there will have to be some more ways to drive efficiency into the system. And one of them is to address this memory bottleneck, which CXL can do in a very unique way with both KV cache offloads, which is really a very key part of any inference workload, as well as checkpointing.

So those are the two workloads or the use cases that we see in AI applications where CXL is beneficial. Having said that, the first implementation or first deployment of CXL will be in general purpose compute, as we have been saying for a while. And the press release that we had and the announcement from Microsoft is a very important next step in that journey where we are now deployed in the data centers in a private setting, of course, for now, to accelerate database workloads for SAP HANA. And it's very easy to see the performance benefits that you get when you have more memory and you can house the entire database in memory. And that's what we are delivering. That's what our customers are seeing.

We are very excited to see the ramp of that in the latter half of next year.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Ultimately, is the goal as we move more to AI workloads, is it the physical design and the layout of the chip where you're able to expand beachfront with products, or is it making memory that is associated with one accelerator look like memory that is a pool that all the accelerators can look at? Like, where is the ultimate end goal here?

Jitendra Mohan
CEO, Astera Labs

Yeah. So actually two very important points that you touched upon. I'll take the second one first, which is, you know, how does having additional memory help? So the use cases that we see, in the AI context, CXL being deployed is as a second tier of memory. So everybody will have the right level of HBM because that is by far the fastest memory that is accessible to the GPU. So long as more HBM is available, people will keep using it. But that is just not enough. And so what you will start to see is a second tier of memory over CXL, or even some other protocols in the future that software can make use of. And the additional latency of this memory can be hidden in the AI workloads.

So that's where we will see CXL play a role, or second tier memory play a role. It does require more software work, which is why this rollout has been slower. But we do see over time this will happen. Now, the second thing that you touched upon is actually a very, very important trend about the beachfront. As these GPUs and XPUs are becoming larger and larger, our customers don't want to put any of the I/O on a reticle-size-limited die. They want to use all of that die for compute, because that's really what they are paid for, and this is, you know, what you need to solve the problem. But that whole compute is useless if you cannot plug in enough data, either data or memory.

So that becomes a very interesting place for us to provide I/O chiplets, which take either copper connectivity or even in the future optical connectivity to make sure this compute, this insanely fast compute, is kept occupied, again, delivering higher efficiency in the overall system. So I'm very excited about both of these trends.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Very, very interesting. All right. Switching to another vertical. So on Taurus, we're hearing a lot about the AEC market in general today. We had a Golden Cable initiative that got launched earlier this week. Another one of your competitors does the full cable solution. You guys kinda sit in between the single chip and the full solution. Maybe talk about your strategy there. And you don't break it out specifically by segment in your earnings calls, but you've definitely talked more positively about that trend of late. Maybe talk about where you are today from a customer perspective and kind of what you're looking for in Taurus in the next year or so.

Jitendra Mohan
CEO, Astera Labs

All dollar questions go to Mike.

Mike Tate
CFO, Astera Labs

Yeah. On the revenue side, we are seeing a nice step up in Taurus. We highlighted in our last earnings call that Taurus will be our biggest driver of growth in Q4. We're ramping into 400 gig solutions with our lead hyperscaler customer. And then we're also moving into 800 gig. We're working with a customer, broadening our customer set with these designs. And we'll see those start to deploy early next year, but much more material ramps in the second half as it layers on top of the 400 gig, which we do expect to grow in 2026.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Do you think that when you get to kind of steady state in the back half of next year, there's a chance that that 800 gig business can cross over the existing 400 gig business, or is that something that just takes time?

Mike Tate
CFO, Astera Labs

We see the 400 gig continue to grow. So because of that, we still see 400 gig be more significant. But as we exit the year, you know, the 800 gig will be starting to play out.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Perfect. Okay. Let's move to the Aries platform, so in a similar vein, you guys address solutions in the 36 and the 72, both with retimers and then also with PCIe switches and cables. That's across both your custom silicon and the Aries 36/72. Maybe let's just spend some time on the retimer business at first. Very early on with NVIDIA, that was very strong. It's the business you went public on. As you move to more system solutions, I think it's well understood you use more NVLink. You still have some content there, but it's not as big as it was historically. Can you just talk about, like, the cadence of the Aries retimers specifically? 'Cause I know, one, you have a large customer that is maybe not using as much, but then you also have custom silicon customers, and you have a refresh of PCIe that's coming.

How to balance those two and look at that business over the next year or so?

Mike Tate
CFO, Astera Labs

Yeah. When we went public last year, you know, in March of 2024, most of our retimer business was on the NVIDIA Hopper platform.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Mm-hmm.

Mike Tate
CFO, Astera Labs

As we made it through 2024, we saw the emergence of these XPUs. These are the hyperscalers doing their own ASICs, and that was initially for scale out, and then we started getting scale up. So we're getting retimers for both. On scale up, a lot of times we get the AEC PCIe, so you get a higher ASP with the FCM models. So as you go into 2025, we've seen tremendous growth from these XPUs driving our Aries growth. At the same time, with the Blackwell transition from Hopper, those revenues became more switching revenues; they still have retimers, but much more heavy on the switching side. So as we go now into 2026, we're seeing the transition from Gen 5 retimers to Gen 6. That will create another leg of growth for us.

But we're, you know, even that being said, we still have a lot of growth on the Gen 5 on these hyperscaler deployments.

Jitendra Mohan
CEO, Astera Labs

I think.

Mike Tate
CFO, Astera Labs

Yes.

Go ahead.

Jitendra Mohan
CEO and Co-Founder, Astera Labs

If I just add one other thing. I think the other, one reason why Gen 6 is still limited is just the availability of Gen 6 GPUs and XPUs. Right now, there is only one vendor, you know, one merchant GPU that is out, and we are already ramping very significantly with them.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Mm-hmm.

Jitendra Mohan
CEO, Astera Labs

Later on this year, we do expect the, you know, deployment of other Gen 6 capable GPUs and XPUs. So, you know, the Aries retimer business is looking good for us.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

A way that investors, I think, look at your connect opportunity over time is looking at pieces of custom silicon that are using either PCIe scale up in the back end or UAL eventually in the back end. In the course of the last couple of earnings calls, you guys have talked about a customer set, at least in PCIe, that has expanded. I believe you said at one point to low double digit customers. Can you talk about, like, what type of engagement those are? 'Cause you would imagine if you're designing some sort of scale up network, it's a long process. It's 18 months at a minimum between ASIC design to deployment.

Jitendra Mohan
CEO, Astera Labs

Yeah.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

What are those conversations like? Maybe any help on where those customers are? Are they large guys, NeoClouds? Anything you could give would be helpful.

Jitendra Mohan
CEO, Astera Labs

Yeah. So we mentioned, I forget last call or the call before that, that we are engaged with 10 plus customers on our Scorpio X family.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Mm-hmm.

Jitendra Mohan
CEO, Astera Labs

for scale up using PCI Express or a PCI Express-like protocol. And the engagements, you know, are in all different stages, from confirmed designs that customers are going to ramp in the coming year. We've talked about the Scorpio X family shipping in initial volume this quarter, ramping in the first half of next year, and then even more meaningfully in the second half. So very excited about that. Qualcomm actually recently, since our announcement, went out publicly saying that they support PCI Express as a scale up protocol. And the reason for that is, you know, anybody who designs their compute system, coming from a compute-centric mindset, picks a load store based, memory semantic based protocol. NVLink is a load store, memory semantic based protocol. PCI Express is a memory semantic based protocol.

And UALink will also be a memory semantic based protocol. So it's not a surprise that when people decided to pick a particular protocol for scale up, they gravitated towards open standard, which is PCI Express. And like you said, many of them we believe will just move over to UALink.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Mm-hmm.

Jitendra Mohan
CEO, Astera Labs

Because their software stack is already optimized for this method of communication, and when you go from PCI Express to UALink, it is just the same thing but runs a lot faster. In terms of the engagement itself, you know, we have ramps coming up. We have other confirmed designs. We also have many customers who are exploring using PCI Express. A full swath of opportunities and engagement levels. But the level of engagement that I've seen on these products, on the Scorpio family, Scorpio X in particular, is the highest that I've ever seen in the 30 years that I've been in this industry.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Mm-hmm.

Jitendra Mohan
CEO, Astera Labs

So we're very excited.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Yeah. I wanna go to the UALink switch, but I wanna stick on the PCIe version of the X switch first. So, if you look at where that content is supposed to come in, you've said in calendar year 2026, I believe you said in the back half as well, there's a lead customer that people are very familiar with, but you're expected to kind of ramp along with that.

When we went to re:Invent or heard from re:Invent that Trainium was working with NVLink Fusion, I think the first reaction from a lot of investors was, "Oh no, does this mean that the future of PCIe scale up is at risk?" We've had a chance to talk in between, but I'd love for you guys to have an opportunity here to talk about why does that actually benefit you guys that there are multiple SKUs versus it being a problem?

Jitendra Mohan
CEO, Astera Labs

Yeah, so we've been able to see a couple of different things. First, just to go back to what you said before. In order for somebody to deploy a new scale up, they have to have made that decision, like, two years ago.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Mm-hmm.

Jitendra Mohan
CEO, Astera Labs

So the opportunity that we are currently tracking with our PCI Express Scorpio X portfolio will continue to ramp through next year, 2026, and through 2027. So any new technology, whether it's NVLink Fusion or even UALink, is unlikely to become dominant until, you know, sometime after that. They will all start to show up partway through 2027 and then become dominant, hopefully, in the years after. Now, talking about NVLink Fusion and the different optionality that some of our customers want to have with their XPUs, I mean, it is only natural to see in this kind of, you know, power and vapor constrained world that people would want to have more optionality. And the AI dynamics keep changing every three months. So I do wanna, you know, first of all, give you the short summary.

The Astera Labs content is growing higher with every generation of GPU and every generation of XPU. There are always, you know, calls that get made on, "Hey, the world is falling. People are going from Hopper to Blackwell," but as we have shown repeatedly, we are very tightly engaged with our customers, strong, strong collaboration. And we are very confident that our content goes up. Now, talking specifically about the Trainium 4 announcement, they talked about two different types of deployments. One, let's call it a native rack, where Trainium talks its native protocol and goes directly to a switch using, you know, NVLink or UALink, as they announced, which is actually a very positive development for UALink. And that is going to be the highest performance system because there is nothing else in the middle.

You go directly from Trainium to the switch and, you know, lowest latency, lowest power, and whatnot. With NVLink Fusion, though, another option opens up where now you can add another component. And this is a fairly complex component, which takes the input from an XPU, Trainium in this case as an example, and converts that into NVLink, because NVLink is not the protocol that XPUs talk natively. So this component has to do protocol translation, manage the data flows, manage security, manage a few other things, and then convert it into NVLink. And the advantage to the hyperscaler of using NVLink Fusion is they can use all of the work that has gone into the NVL72 rack, which is immense: not just the NV switches and the NVLink protocol itself, but all of the liquid cooling, you know, power supply; the entire supply chain can be leveraged. The price that you have to pay is additional latency in this component, additional power dissipation, and certainly additional cost.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Mm-hmm.

Jitendra Mohan
CEO, Astera Labs

So exactly how an end customer will deploy a native rack or call it this hybrid rack remains to be seen. But on the whole, our content goes up, because, in this system where you're doing a translation of protocol from one to the other, your attach goes on a per XPU basis. So you're taking a complex component and attaching it to every single XPU. Whereas in a switch scenario, you typically have one switch that attaches to multiple XPUs. So we are very excited about this announcement. And actually, we are more excited about the trust that both the hyperscaler customer and NVIDIA have placed in us to be able to develop this critical component.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Something that I've thought about, and I'm unsure at this point, and maybe you have a good answer here, is if I look at when NVLink Fusion was first announced, there were side-by-side comparisons showing either a CPU or an XPU in the scale up architecture. And I would imagine connectivity would scale with you guys as form factors, either a CPU or an XPU, come into the NVLink world. Is it possible for customers to potentially do both of those? And would that also be a connectivity uplift for you guys at the same time?

Jitendra Mohan
CEO, Astera Labs

Yeah. This is changing. I mean, the AI world is changing so rapidly, it's hard to keep up. This particular announcement for Trainium was to use NVLink Fusion for scale up.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Yeah.

Jitendra Mohan
CEO, Astera Labs

This is not the link from the XPU to the CPU. This is where our content is. When the link goes from the XPU to the CPU, we don't know yet what will happen.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Mm-hmm.

Jitendra Mohan
CEO, Astera Labs

In the past, we had a retimer content there, as you know, in the Hopper generation. In the Grace Blackwell generation, that content went away.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Mm-hmm.

Jitendra Mohan
CEO, Astera Labs

It really depends upon how the customer decides to architect their system. If they're able to put the CPU very close to the GPU, then there is no need for a retimer. But if the GPU is further away from the CPU, then you do need a signal conditioning component. And of course, our retimers are an excellent choice that customers leverage.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Okay. Helpful. Back to the X switch on the UAL front. Marvell at their earnings talked about a UAL switch. The integration point for you in the market seemingly is Helios in the 500 generation. Can we talk about timing of when we're expecting to see that in market? And then also, does it matter that competitors are there first? Where are you competitively?

Jitendra Mohan
CEO, Astera Labs

I mean, right now, nobody has a UALink switch out in the market.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Yes.

Jitendra Mohan
CEO, Astera Labs

It's still early innings, and it's greenfield deployment. As you know, Astera's philosophy is to talk about products when we have them and we can show them working at a trade show or what have you, so that's likely the strategy that we will follow for UALink as well. But you know, as has been discussed before, UALink is a really important development for the industry and a fantastic, fantastic opportunity for Astera Labs, so we will take all of our, you know, combined knowledge that we've built by being in scale up. See, when you are trying to build a scale up switch, there are two things that are really important. Everybody talks about the protocol, whether it's, you know, NVLink or PCI or UALink or even Ethernet, which is, of course, an essential component of the switch.

But there's another part, which is how do you make this switch operate seamlessly, reliably, with the most amount of efficiency in a scale up system? Keeping these, you know, complex racks up and running operationally is a huge challenge. And by being in the PCI Express scale up topology, we've learned a whole lot about what else is needed outside of the protocol. And our customers have deployed our Cosmos software in their operational stacks to configure these devices, get telemetry information, and make sure that the fleet operations run smoothly. And all of that we'll carry over to UALink. So we believe, we are very confident in our ability to execute towards these UALink switches. We think that you will start to see some of them towards the second half of 2026, for pipe cleaning and so on, samples, etc.

and the full qualification will happen in 2027, and then ramps will follow after that.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Very helpful. When you look at the world, I think many of us think in ones and zeros and black and white in terms of how deployments will occur. Either the world is all going to UAL or the world is all going to Ethernet, and, you know, that's the end of it. When you look at the end state, or at least what we can think of as an end state in AI for the next 5-10 years.

Jitendra Mohan
CEO, Astera Labs

Yeah.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Do you think that this is a fractured world where certain guys are using scale up Ethernet, certain guys are using UAL, and it's really just whoever's married to what protocol? Or do you think that we go entirely in one direction?

Jitendra Mohan
CEO, Astera Labs

Yeah. So, you know, you'll get many different opinions to this question.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Mm-hmm.

Jitendra Mohan
CEO, Astera Labs

Everybody's gonna talk their playbook. While I'm biased, I'll try to be as objective as I can be. The way we see it is the latter, which is gonna be, you know, a fractured world where all of these standards will actually coexist. NVIDIA will continue to do what they do with NVLink. There is gonna be an NVLink ecosystem, which previously we were not a part of. With this NVLink Fusion announcement, we are now part of that. That's, you know, very exciting incremental TAM. We can participate in the NVLink ecosystem, if you will. There is the PCI Express ecosystem today that, again, we are in pole position with.

As I mentioned earlier, we do believe, just because of the amount of work our customers have put into their scale up software, that those who are using PCI Express-based systems and using load store protocols will naturally want to transition over to UALink, because it's very similar. It is even a more optimized version of PCI Express and removes the line speed bottleneck that PCI Express currently has. So, you know, they will likely go from PCI Express to UALink. Now, at the same time, there are the folks that are currently using Ethernet. Publicly, only Intel has really stated that they're using Ethernet for scale up, but there are others. They have optimized their software to use RDMA over Ethernet. And they are unlikely to change as well. So I think those guys will continue to use RDMA over Ethernet.

Maybe in future, they will go RDMA over Ethernet, who knows?

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Mm-hmm.

Jitendra Mohan
CEO, Astera Labs

But I do think that these three different camps will continue to coexist out in the future.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Any plans? You obviously are doing Ethernet over cabling right now. There is a large Ethernet switching world, which is dominated by a single player today.

Jitendra Mohan
CEO, Astera Labs

Mm-hmm.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Just given your technology stack and your engagement with hyperscalers, would you ever see your portfolio moving into an Ethernet switching platform?

Jitendra Mohan
CEO, Astera Labs

I think, first of all, I would say never say never.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Mm-hmm.

Jitendra Mohan
CEO, Astera Labs

You are also correct that the Ethernet side is dominated by one large player. And they make really good switches; you know, for everything else that they might do, they do make very good products. So, at the end of the day, we have to look at each opportunity and see what our role is going to be. What is it that the customers are asking us to do? What differentiation can we bring from Astera? And what gives us a lasting kind of sustainable advantage in that space? Today, our customers are asking us to focus on PCI Express and transitioning that to UALink. And that's a massive opportunity for us. So that's where we are focused.

If things change, we run out of things to do, or customers tell us to go in the Ethernet direction, then we will approach that. But we'll approach that carefully because of the large incumbent and, you know, the decades of learning that have gone into that product.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

NVIDIA is talking more about optics in the next generation. The supply chain is talking late 2026, early 2027. Marvell just bought Celestial, and is talking about the future of scale up optics. You guys bought aiXscale, I believe that's how you pronounce it, in October. It feels like this is your play in that market as well. Could you maybe talk about when you see optics intersecting scale up architectures? Is it something that you think proliferates from a scale out perspective first and then gets to scale up? Anything on the timing and what that means for your business?

Jitendra Mohan
CEO, Astera Labs

Yeah. I apologize. I have.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Take your time.

Jitendra Mohan
CEO, Astera Labs

Leftover from, you know, COVID from a month ago. Also, it comes about when somebody asks a harder question. I don't know what to say. So, no, kidding aside, we've been talking about optics now for some time, that it's actually a huge opportunity for us. Any amount of dollars that we get from a copper link, we can significantly increase with an optical link. So if magically everything could go optical, we would actually love it, because it's a huge TAM expansion and huge opportunity for us. And it is exactly for that reason that our customers don't want to go to optics, because you have to pay in power and you have to pay in dollars.

Having said that, as these systems become more complex, data rates go up, and there is a desire to expand the scale up domain from, you know, one rack to two, three, four racks, you have no choice but to go to optics. And we are very well engaged with our customers on what that inflection is. It is likely to be in the 2028, 2029 timeframe. But also, imagine, you cannot go from zero to full deployment of optics in one shot. So there is gonna be some pipe cleaning deployment that happens in 2027, leading up to a bigger deployment in 2028. So with this aiXscale acquisition, we feel we are very well prepared to intercept what our customers' plans are. If you look at an optical link, it is made up of three components.

There is an electrical IC, and of course, electrical runs in our blood. We have no problem building electrical ICs. It does require some foundational technologies for it to be able to talk to the second component, which is photonics. That's a photonics IC. We have been investing in this space, with analog mixed-signal technology that allows us to drive a photonics IC, from our electrical IC. Then there is a very important piece, which is the connector, which takes the light output from the photonics IC and couples it to fibers and then eventually gets it out.

You know, our belief after doing a lot of surveying is that this connector is actually a really key piece and a limiter in how you can scale optics: producing millions of EICs or millions of PICs is relatively easy to do, but then connecting fiber, etc., reliably is not. We felt that aiXscale was probably the closest to getting to high volume production of this piece. It would have been very difficult for us to do ourselves, so we went ahead and acquired them. On the photonics piece, we also have the capability of building our own photonics, which we will do. We are also very open to working with other players that our customers might point us to.

Just like scale up is a religion in many ways, the choice of photonics is also a very religious argument between customers.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Mm-hmm.

Jitendra Mohan
CEO, Astera Labs

And Astera's philosophy is not to force our customers to do things our way. We would very much rather enable our customers to keep doing what they want to do. If they come and say, "We want you to use this photonics, couple it with your electrical IC and your packaging solution," we will be very happy to do that.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

So Astera Labs has one of the most robust growth trajectories of any company I cover, spitting off a ton of cash, new product introductions. Mike, maybe talk about as these new products come into the fold, what does that do for gross margins? One. And then two, from a capital returns perspective, I know you've been very, very clear about reinvesting in the business given all of these opportunities, but any priorities that you'd like to discuss?

Mike Tate
CFO, Astera Labs

Sure. We've been very clear since we went public that as we broaden our product portfolio and increase our TAM opportunities, that our gross margin model is 70%. So we still believe that is where we're tracking to over the long term. And with that and the growth that we have, that we could deliver 40% operating margins with that. We're overachieving on both sides of that. But as we continue to grow and mature, that is kind of the direction we're headed. As for, you know, cash flow, we're very profitable right now. We're building up a very large balance sheet.

But we, you know, we still think at this early stage of the company, just going public last year, that we'll continue to look to potentially deploy those for strategic M&A or other strategic initiatives at this point. You know, and as we mature and continue to develop the track record of cash flow positivity, then we'll look at ways to return capital to investors as well.

Thomas O'Malley
Director and Equity Research Analyst, Barclays

Feels like it's been a lot longer than a year. It's been a lot of great stuff from you guys. Appreciate you being here. Thank you so much.

Jitendra Mohan
CEO, Astera Labs

Absolutely. Thank you very much.
