Astera Labs, Inc. (ALAB)

28th Annual Needham Growth Conference (Virtual)

Jan 14, 2026

Quinn Bolton
Semiconductor Analyst, Needham

Welcome back, everybody, to the second day of Needham's 28th Annual Growth Conference. My name is Quinn Bolton. I'm the semiconductor analyst for Needham. It's my pleasure to host this fireside chat with Astera Labs, founded in 2017 and headquartered in San Jose, California. Astera Labs provides rack-scale AI infrastructure through purpose-built connectivity solutions. The company's intelligent connectivity platform integrates CXL, Ethernet, NVLink, PCIe, and UALink semiconductor-based technologies with the company's Cosmos software suite to unify diverse components into cohesive, flexible systems that deliver end-to-end scale-up and scale-out connectivity. Joining me on stage from the company are Jitendra Mohan, Co-founder and CEO, and Nick Aberle, VP, Treasurer and Investor Relations. Jitendra and Nick, thank you for joining us.

Jitendra Mohan
Co-founder and CEO, Astera Labs

Thank you. I couldn't have said that better myself.

Quinn Bolton
Semiconductor Analyst, Needham

Thank you. Just to start off with some big-picture questions: Astera is a leading play on rack-level AI infrastructure, which we call Infrastructure 2.0. As a brief introduction for investors who may be less familiar with the company, can you provide an overview of the various products that Astera supplies to enable Infrastructure 2.0?

Jitendra Mohan
Co-founder and CEO, Astera Labs

Yeah, certainly. I see many familiar faces and actually many new ones. So happy new year, everyone. Maybe what I'll do is start on the other side, with what our vision is, and then we'll translate that back into the products. The vision that we have at Astera Labs, and it has been this way ever since we founded the company, is to provide all of the connectivity infrastructure that our customers need. What has changed between 2017, when the company was founded, and now is that the basic unit of compute has become a rack. Back in 2024, you would have a server, maybe a shelf, that housed eight GPUs, and that was the workhorse of the AI industry; it actually still is today. But moving forward, that has changed to a full rack.

At Astera, we want to provide the full connectivity infrastructure that goes into this rack. That consists of the switches that connect all of these GPUs together, typically called scale-up switches; all of the signal conditioning components, like retimers, gearboxes, and active electrical cables, that aid with the connectivity; as well as the software that ties it all together. So for us, that translates into multiple different products, on the hardware side as well as software. On the hardware side, we have our Scorpio P and X families of fabric switches. The Scorpio P family is responsible for PCI Express connectivity, typically used in scale-out applications. Scorpio X is responsible for GPU-to-GPU scale-up connectivity; that's used for building scale-up networks.

And then we have our Aries retimers, which are deployed either as a chip down on a board or as an active electrical cable, both for scale-out as well as for scale-up applications. Then we have our Taurus products, which are also for signal conditioning, but for Ethernet. These are typically deployed as active electrical cables, where Astera provides a smart cable module. And last but not least, we have our Leo products, which are a little bit different in that they address the memory bottleneck in these AI systems, allowing you to add DDR5 memory with CXL connectivity, starting with general-purpose compute applications, but eventually also making their way to AI. At this point, I'm very happy to say that all of these product lines are in production and contributing meaningfully to our revenue.

Quinn Bolton
Semiconductor Analyst, Needham

Excellent. And we'll get into a lot of those products in a moment. But when you look at your product portfolio, you have the PCIe and UALink fabric switches, and you have the signal conditioning products. What are the company's core competencies that tie all these products together?

Jitendra Mohan
Co-founder and CEO, Astera Labs

So I would say there are maybe three things. First of all is our architecture. We came up with a software-first architecture, where we try to do whatever we possibly can in software or in firmware. So there are many, many embedded microcontrollers in our chips that are doing a lot of the processing. The advantage that gives us is that the solution becomes extremely flexible. As the needs of AI change and you get a new workload, we can morph our solutions to address it optimally. We can customize the solution to our end customers' requirements, because everybody tries to do their own scale-up networks, or their own infrastructure development, a little bit differently. And so we have the ability to optimize and customize this for our end customers.

As a result of this flexible architecture, we also have tons of diagnostics. If you think about somebody who's deploying hundreds of thousands, millions, of these GPUs and XPUs together, they want to make sure that their infrastructure continues to hum along very well. For that, we need to produce a lot of diagnostics information: the health of the links, the health of the chips, the health of the systems. And we tie this all together, and this is the second component, with our Cosmos framework. Cosmos is a software framework that ties all of our chips together and allows our customers to customize the solution, optimize the solution, collect all of the diagnostics, and really make their infrastructure hum smoothly. The third component, which I think is probably the most significant one, is not technical at all.
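The diagnostics flow described here, per-link health rolled up into a fleet-level view, can be sketched in miniature. To be clear, this is a purely hypothetical illustration of the idea, not Astera's Cosmos API; every name and threshold below is invented.

```python
# Illustrative sketch only: hypothetical names, not Astera's Cosmos API.
# Models the idea of per-link health telemetry aggregated across a rack.
from dataclasses import dataclass

@dataclass
class LinkStats:
    link_id: str
    corrected_errors: int    # errors fixed transparently (e.g. by FEC)
    uncorrected_errors: int  # errors that reached the host

def link_health(stats: LinkStats, window_bits: float) -> str:
    """Classify a link from its observed bit-error behavior."""
    if stats.uncorrected_errors > 0:
        return "degraded"
    # Flag links whose corrected-error rate is drifting upward.
    ber = stats.corrected_errors / window_bits
    return "watch" if ber > 1e-9 else "healthy"

def rack_report(links: list[LinkStats], window_bits: float = 1e12) -> dict:
    """Aggregate per-link health into a rack-level summary."""
    report = {"healthy": 0, "watch": 0, "degraded": 0}
    for s in links:
        report[link_health(s, window_bits)] += 1
    return report
```

The design point this toy captures is the one made above: because the error counters live in the connectivity silicon itself, a software layer can watch links drift toward failure before the GPUs ever see an uncorrected error.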

It is really the trust that we have developed with our end customers over the last seven, eight years, where they are willing to share with us what their roadmap is and what solutions they want to bring to the market one, two, three years out. That gives us a very significant leg up in how we define our own roadmap: what features we put into our products and what features we leave out, which oftentimes is even more important. And certainly, what products to build and in what time frame to build them. So I'm really proud of the relationships that we've built with our lead hyperscaler customers.

Quinn Bolton
Semiconductor Analyst, Needham

That's a great overview. So you've got your software-defined hardware architecture. You've got the Cosmos software suite. You have the customer trust. All of that has enabled you to build sort of a competitive moat. Where do you think you have the greatest lead across the product portfolio relative to some of your competitors?

Jitendra Mohan
Co-founder and CEO, Astera Labs

That's a good question. I would say that our presence in different segments is different, but we do aspire to have a leadership position in all of them. Maybe the greatest lead today, just from a volume or market share standpoint, is in the retimers. We are the de facto choice when it comes to PCIe retimers. Very rapidly, we are following that up with the PCIe switches, the Scorpio P and Scorpio X families. We were not the dominant player in the PCIe Gen 5 generation; Broadcom was actually the dominant player. But we are well on our way to establishing that leadership position with PCIe 6. We were the first ones to introduce a PCIe 6 switch family. We were the first ones to introduce a PCIe 6 retimer family.

And in fact, if you think about PCIe itself, that is where we are the strongest, with a full portfolio from retimers to gearboxes to cables to switches. So if a customer wants to deploy a PCIe-based solution, then Astera is the one-stop shop for that. We'll translate this into UALink in the future, where all of the hardware and software components carry over and just run faster with UALink; we aspire to the same position there. Ethernet is perhaps where we have a less dominant position. Ethernet has always been a share game. Customers like to have multiple sources, and so we compete for share with other companies in the Ethernet space.

Quinn Bolton
Semiconductor Analyst, Needham

Great. One last question about the AI spending environment. There have been some concerns about NVIDIA and the circularity of investments, and some concerns about the increasing amount of debt used to fund CapEx. What are you seeing from your customers, their spending plans, their roadmap plans? Do you expect continued strong growth in 2026? Are you seeing any clouds on the horizon?

Jitendra Mohan
Co-founder and CEO, Astera Labs

So first of all, these circular investments are not my area of expertise; my head also spins. But fortunately, we don't have to worry about it as much, because, just as an end customer myself of Gemini and ChatGPT and Meta Llama and all of the new things that are coming up, what I am very confident of is that the end customer demand is very strong. So who funds what? Vendor-backed financing has always been a thing in the past as well. The reason it failed in the internet era was that there was no end demand; everybody was just fueling this. Here, the fundamental difference is that the end demand, I think, is very strong. Everybody who's deploying these AI systems is saying that, hey, there is ROI. They are getting returns on their investment, and we see that translated into our order pattern as well.

So we don't see any evidence of a slowdown in 2026 or in 2027.

Quinn Bolton
Semiconductor Analyst, Needham

Excellent. Wanted to move now to the products. We'll start with Scorpio X, as it will become, I think, over the next 12 or so months, your biggest product line. Last month, AWS hosted its re:Invent show and made a couple of key announcements that I think affect the company. First, they announced that the Trainium3 platforms moved to a PCIe-based switch fabric topology in the scale-up network. And second, as they look forward to their next-generation platform, they talked about migrating scale-up to UALink and NVLink. So how do you see these announcements at a large customer affecting your business?

Jitendra Mohan
Co-founder and CEO, Astera Labs

These are all very positive for us. None of these are any surprises; we've known about this for some time, but of course, we can't talk about it until our customers make the announcement. As I mentioned earlier, on the PCI Express side, we have a full portfolio and are very closely engaged with our customers, so we see that as really our backyard in terms of the benefits to us, so long as customers are deploying PCI Express-based scale-up. And while AWS announced that at re:Invent, there are other customers, many other customers, actually; we mentioned 10-plus engagements in that space. So we are really excited about the potential for Scorpio X. And you are correct, Quinn, that it is on track to become our largest product line, overtaking both Scorpio P as well as the retimers over time.

The other important announcement was that this is the first public acknowledgment by AWS of support for UALink. That is good both for us at Astera and for the industry, for UALink deployments as a follow-on standard to PCI Express. We do think that customers who are currently looking at PCI Express-based scale-up networks will, over time, transition to UALink. So I think that is great. NVLink Fusion was a new announcement that caused a little bit of a stir in the market to begin with. NVLink Fusion is where you are able to use the nice ecosystem that NVIDIA has curated with the NVL72, including the power components and the liquid cooling and all the nuts and bolts, and the NV switches in particular, and use that with your own compute array.

So it presents a new opportunity for Astera. Previously, NVLink was really off-limits. Now we have the ability to build a solution that attaches one-to-one with the XPU: for every XPU, you need this solution. This is a new TAM, so we are very excited about it. We have one hyperscaler customer that's deploying it. Hopefully, we'll see other customers deploy it as well over time.

Quinn Bolton
Semiconductor Analyst, Needham

Is that solution something that takes UALink and does a protocol conversion to NVLink? Or is it more that you'll have a native NVLink system, and you just have chips that go into providing that NVLink connectivity?

Jitendra Mohan
Co-founder and CEO, Astera Labs

Yeah. So the way this works is that XPUs talk in their native language, whatever that may be. Maybe it's PCI Express, maybe it's UALink, or maybe it's something else. But the NVL72 infrastructure talks NVLink, so you need to do a protocol translation. And it is not like a retimer-type product, where certain data comes in and the same data goes out. There are a lot of complex data flows that need to be managed, security that needs to be managed, links that need to be managed. So it ends up becoming a complex solution where we can command a healthy ASP. And given that it has a higher attach rate than switches, in terms of the dollar opportunity it ends up looking very similar to us in both cases.

Quinn Bolton
Semiconductor Analyst, Needham

Excellent. OK, great. The other announcement was Marvell's recent acquisition of Celestial AI, which is one of the startups in the CPO segment. And with that acquisition, Marvell and Celestial named Amazon as their lead customer for CPO solutions. And so as you think forward to a future system that integrates CPO both on the XPU and the switch fabric, how does that affect your business? Would you look to come in with CPO capabilities yourself? Would it be a chiplet design where you can do the switch fabric and then have chiplets that provide the connectivity off-chip? How should investors be thinking about adoption of CPO in future fabrics?

Jitendra Mohan
Co-founder and CEO, Astera Labs

Yeah. So at the highest level, CPO is a net increase in TAM. CPO optical solutions are a lot more expensive than copper solutions, so as the world shifts to optical, it is good for anybody playing in optical solutions, and we'll absolutely be playing in there as well. That is also a reason why I believe our customers don't want to go to optics until they have to: they pay a lot more for optics, both in power and in cost. So overall, it is up to the industry, including folks like us, to deliver a reliable, low-cost solution that enables optics and therefore allows disaggregation of these racks. We are already in close engagement with our customers on when this transition is supposed to happen and when they would like to deploy optical, and we will be right there.

Now, to your second point, you cannot deploy optics in isolation. You cannot say, I have an optical engine, and now you can deploy optics. You must have a switch that's capable of talking optics, and you should have an XPU that's capable of talking optics. And it is our plan to create an optical engine that will enable the Scorpio family to have optics I/O in addition to copper-based I/O. We announced the acquisition of Xscape, a company that does the packaging piece, which we believe is the most critical piece to scale optics. This allows very efficient connection of optical fibers into the silicon photonics chip. Now, unlike Celestial, for example, which has chosen to go down a particular path of silicon photonics, we are open to multiple solutions.

If you need to work with, let's say, a Celestial type of solution, we can happily work with that. We will have our own silicon photonics offering that a customer can use if they want, or if they want us to work with a third party, we will work with anybody whose silicon photonics our customer wants to use. And then, so you have packaging, you have silicon photonics, and then you have an electrical IC, the chiplet you mentioned. This chiplet is also very important because it has to understand what the XPU is talking.

And this is where the analogy with NVLink Fusion becomes important: if you already understand what the XPU is talking, it is easier for you to produce this chiplet, which understands the XPU on one side and can produce the electrical signals to drive the silicon photonics on the other. So the summary is that when the industry is ready to deploy CPO, Astera will have our own solution, but at the same time, we are also open to working with whichever silicon photonics our customers choose.

Quinn Bolton
Semiconductor Analyst, Needham

There have been a lot of investor questions about CPO over the past couple of months. Where, and when, do you think the market begins to adopt CPO solutions? Do you think it is adopted in scale-out networks or scale-up networks? And is it a couple of years away, or potentially further out? Just any big-picture thoughts, without trying to pin you down to specifics.

Jitendra Mohan
Co-founder and CEO, Astera Labs

Yeah, yeah. This actually came up about six months ago, when I was at one of these conferences and somebody predicted the demise of copper. First of all, that's not going to happen. We do think, like you said, that scale-out is the first place where CPO gets deployed, and you can see that happening with the demonstrations that Broadcom and NVIDIA have done with their respective switches. There is a definite advantage there, because the alternative is pluggable optics, and compared to pluggable optics, CPO provides all kinds of advantages. Compared to copper, it does not; the advantage it provides, of course, is longer reach. So we do think that scale-out is where optics get deployed first. Whether that happens in 2027 or 2028 or 2029 remains to be seen.

But optics for scale-up, we believe, is more of a 2028, 2029 proposition. And part of it is also dependent upon the industry: like I said earlier, if you can come up with an optical solution that's as cost-effective, as reliable, and as low-power as copper, then maybe that transition becomes faster.

Quinn Bolton
Semiconductor Analyst, Needham

Excellent. You'd mentioned the 10-plus engagements for PCIe scale-up switches. How are those engagements progressing? Are you more or less bullish on the PCIe scale-up opportunity than, say, you were six months ago?

Jitendra Mohan
Co-founder and CEO, Astera Labs

Oh, definitely more bullish. The amount of traction that we have for the Scorpio X family is just through the roof. I have not seen anything like this in my roughly 30-year career. We mentioned 10-plus engagements. AWS talked about their move to PCIe-based solutions for scale-up switching, and there are many others exploring this because, frankly, PCI Express is a very good protocol. It is designed to be low latency. It is designed to provide memory semantics. It is a fully open standard; you can get components from multiple vendors. So a lot of customers are working on it. Some of those are confirmed design wins for us, actually starting to ramp in Q1, with more volume coming in the back half as the XPUs get deployed.

On the other end of the spectrum, some of them are more in the exploration stage, but the amount of excitement and traction from customers is such that we are barely able to keep up.

Quinn Bolton
Semiconductor Analyst, Needham

How long do you think the PCIe scale-up switch fabric lasts? I think there's some perception that this may be a one- or two-year cycle, and then you'll get broad conversion over to either UALink or UEC. What's your view? Is PCIe as a scale-up fabric likely to stick around for perhaps longer than investors believe?

Jitendra Mohan
Co-founder and CEO, Astera Labs

Yeah, PCIe is still growing. I think it'll stick around longer than anybody believes. In fact, the engagements that we already have on our PCI Express portfolio are ramping through 2027 and 2028, and customers are now starting to talk about PCIe 8 already. So I think PCI Express as a protocol will continue for a long time, and for PCI Express as scale-up, people are already committed through 2027 and even into 2028. Having said that, those who are using PCI Express today are likely to turn over to UALink as that becomes established. So we think there will be initial deployments in 2027, perhaps starting to pick up speed in the later half of 2027 and in 2028. We also think that folks doing PCI Express today will move to UALink because they are both memory-semantics-based protocols.

So any software optimization that folks have done for PCI Express translates rapidly over to UALink; we just get to run that protocol a lot faster and deliver better performance in their system. For the same reason, those who are using Ethernet today are likely to continue to use Ethernet. Ethernet does have higher latency and a different way of setting up memory addressing, but if you have optimized your XPU software to use RDMA over Ethernet, it is likely that you will stick with Ethernet until both UALink and UEC become established, and maybe then there is some benefit to switching over. So for the large part, we feel that customers will stick to their swim lanes, and maybe further out, when both standards are fully established, there might be some people that move over.

Quinn Bolton
Semiconductor Analyst, Needham

On UALink: looking forward to the UALink switches, you're starting from a very strong position in PCIe fabric switches. As we look to UALink, you're developing UALink switches, and your competitor Marvell is also developing UALink switches. They recently announced the acquisition of XConn to bolster their switch capabilities, I think both on the Ethernet side as well as the UALink side. Maybe talk about how you feel positioned in UALink switching, as I think both companies, yourselves and Marvell, expect to start sampling switches later this year.

Jitendra Mohan
Co-founder and CEO, Astera Labs

Yeah, it's good validation of what we have believed and said for a while: that scale-up switching is a very big space. And of course, Marvell made the acquisition of XConn to get better footing in the PCIe switching space. They acquired Innovium a while back, so with that, they have Ethernet, and they did announce UALink as well, and so did we. Now, I think the difference between the two of us is that we have been in this space for some time. We announced our Scorpio P and X families, and not just announced but demonstrated working silicon, at OCP in 2024. Since then, not only have we taken these devices to production and generated significant revenue, we've also generated significant learnings about what works and what doesn't. PCI Express itself is a complicated protocol.

But when you deploy it for scale-up, there are many learnings and many customizations that we have had to do, based on our Cosmos software. Those learnings will stay with us and translate to the next generation of devices, whether in PCI Express or in UALink. So I think that's a unique advantage that we have as we deploy the next generation of our Scorpio family for PCI Express and work on UALink. There is another part that is completely separate from the protocol, which is management. We also understand what type of data and diagnostics information customers look for in order to manage these racks, and that knowledge, again, will carry over from the current generation of Scorpio X devices to future generations.

Quinn Bolton
Semiconductor Analyst, Needham

That's a good point to move on to the P-Series PCIe switches. I think the original design win was in AI head nodes on custom versions of the NVL72 rack, so I'm curious about your outlook for the P-Series switch in future AI head nodes. It looks like more recently you're starting to see Scorpio P perhaps being used as part of the PCIe fabric to help with onboard connectivity between XPUs on a tray, maybe a newer use case that some of us, when you announced the product line, weren't thinking about. But what's your overall outlook for Scorpio P?

Jitendra Mohan
Co-founder and CEO, Astera Labs

Yeah. So the P in P-Series stands for PCI Express, so any standard PCIe application makes use of the Scorpio P-Series. X is for scale-up; it's typically a customized version of the PCIe protocol specific to scale-up. That's the difference between the two. Scorpio P promises to have an even wider, more diversified user base than Scorpio X, and we are starting to see that. We announced on the last earnings call that we now have a new hyperscaler customer deploying P, and there are others in the hopper as well. So over time, we expect to land basically all of the major customers on the Scorpio P platform. So again, a bright future for Scorpio P. Scorpio X is for the customers that are using scale-up. To answer your question, in 2025, Scorpio P was the main revenue contributor.

It'll continue to ramp in 2026, but we will start to see Scorpio X layer in on top of Scorpio P in 2026.

Quinn Bolton
Semiconductor Analyst, Needham

And you're seeing Scorpio P both across custom NVIDIA platforms as well as hyperscaler ASIC platforms?

Jitendra Mohan
Co-founder and CEO, Astera Labs

Yeah, sorry, I didn't answer that. Yes, absolutely. I think with complete hindsight, we made a very good decision in developing that particular form factor of Scorpio P, the 64-lane device, which was designed for AI workloads. It doesn't do everything, but it does those AI workloads very well. It allows you to connect an XPU that might be running, let's say, Gen 6 to the rest of your infrastructure, whether that's SSDs or network controllers and so on, that might still be Gen 5. And we are seeing more use cases across different hyperscaler customers that want to use that Scorpio P-Series device in this application.

Quinn Bolton
Semiconductor Analyst, Needham

Excellent. Wanted to switch now to the Aries PCIe retimers. First, can you speak to the emergence of PCIe Gen 6 devices over the past year and how you're positioned to capitalize on this opportunity relative to the PCIe 5 generation?

Jitendra Mohan
Co-founder and CEO, Astera Labs

So our PCIe 6 generation is now in full production. We introduced the first demonstration of it right around our IPO in 2024. Just like I talked about on the switches, it has taken a lot of learnings to go from an initial proof of concept to full production. So I feel very confident about PCIe Gen 6. In Gen 5, we have a leadership position and very strong market share, and I think we are on track to deliver the same thing with the PCIe 6 generation as well.

Many of our competitors have announced similar products, but we have yet to see any of them go to full production or match the robustness that our Aries family has, both by carrying over the learnings from PCIe Gen 5 and through all of the new learnings for PCIe Gen 6 that are now incorporated into the PCIe Gen 6 family. As we go from PCIe Gen 5 to PCIe Gen 6, the opportunity actually gets larger, because the data rate is twice as high and the signals don't reach as far as they did with PCIe Gen 5. So we have the opportunity for higher attach as well as higher ASP as we go from PCIe Gen 5 to PCIe Gen 6. On the whole, our retimer portfolio grew from 2024 to 2025, and it's going to grow again from 2025 to 2026.
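The reach-versus-attach effect described here can be made concrete with a toy model. The reach and trace-length numbers below are hypothetical placeholders; only the direction of the effect (doubling the data rate shortens reach, which adds retimer sockets) comes from the discussion.

```python
# Toy sketch of why a faster PCIe generation can increase retimer attach.
# The centimeter figures are hypothetical, for illustration only.
import math

def retimers_needed(trace_cm: float, reach_cm: float) -> int:
    """Retimers needed along a trace: one at each point where reach runs out."""
    segments = math.ceil(trace_cm / reach_cm)
    return max(segments - 1, 0)

# Same hypothetical 50 cm path, with unaided reach roughly halving per generation:
gen5_sockets = retimers_needed(50, 30)  # longer reach -> 1 retimer
gen6_sockets = retimers_needed(50, 15)  # halved reach -> 3 retimers
```

Combined with the ASP lift per device, this is the "higher attach as well as higher ASP" argument in miniature.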

Quinn Bolton
Semiconductor Analyst, Needham

One of the concerns we've heard from investors is that NVIDIA's NVL reference designs have largely moved away from retimers. So talk about some of the drivers of the PCIe retimer market more broadly. Obviously, the upgrade from 5 to 6 gives you an ASP lift, and shorter reach probably means more retimers in general. But what drives the market going forward?

Jitendra Mohan
Co-founder and CEO, Astera Labs

Yeah, so I think there are two answers to that. One is just the signal conditioning answer: there is physics that says if you run a faster protocol, you don't go as far, and therefore you need to add a retimer somewhere in the middle. Sometimes that retimer happens to be on a board; other times, the retimer is actually in the cable. Just like we have our Taurus cables, which are AECs for Ethernet, we have PCI Express-based AECs as well, where we supply the smart cable module and somebody else builds the entire cable. We are seeing a lot of growth in that deployment of PCIe 6 retimers. In addition to that, the use cases that we have for scale-out continue to be there as well.

When NVIDIA moved to a different architecture with Grace Blackwell, we lost content in the reference design, but we actually gained a lot more content when hyperscalers customized the deployment of the Grace Blackwell platform into their own data centers. They have said that they're going to do the same thing for Vera Rubin, so I think that leg of growth continues as we go from Grace Blackwell into Vera Rubin. But the fact that customers are deploying PCI Express-based protocols for scale-up is perhaps the bigger opportunity for these devices.

Quinn Bolton
Semiconductor Analyst, Needham

Two other questions. Are you starting to see, or when do you expect to see, broader adoption of PCIe retimers in the general-purpose server market?

Jitendra Mohan
Co-founder and CEO, Astera Labs

Yeah, we are already shipping PCIe retimers into the general-purpose market, but it has not taken off to the same level as AI, just because the endpoints are not quite there yet. If you look at the endpoints, the network cards, the SSDs, they're still PCIe Gen 5, and the server form factor is such that you don't need signal conditioning in every case. So when the CPUs go to Gen 6, which they are now, and these network cards and SSDs go to Gen 6, which at some point they will, that's when we will see more Gen 6 retimers going into the general-purpose compute side of the market.

Quinn Bolton
Semiconductor Analyst, Needham

Okay. One of the things I think we've seen is that the cadence between PCIe Gen 5, 6, 7, and 8 may be accelerating, which sounds like it could create gearbox opportunities for you. Talk about that. And you've talked about not seeing a lot of competition yet from Montage, Marvell, Broadcom, or Credo; do any of those competitors have a gearbox capability built into their retimers?

Jitendra Mohan
Co-founder and CEO, Astera Labs

Again, I'm not aware of any, because a gearbox is a very special feature. You need it because you're translating between PCIe Gen 5 and Gen 6, and even though they are both PCIe, there is actually a difference in protocol. Because we have a switch, we have a much deeper understanding in our IP of how to do this gearboxing and translation function, which is what we use for our PCIe 5-to-6 gearboxes. So unless somebody has that switching expertise, it'll be a little bit difficult for them to do the same thing. The other point that you made is also very valid: these generations are coming faster and faster. So you have a need for a gearbox from 5 to 6; well, then when everything goes to 6, you don't need that gearbox anymore.

But by then, somebody has gone to Gen 7. So you might need a Gen 6 to Gen 7 gearbox. So while it's a transitional socket, and we fully acknowledge that, you will continue to need this, it seems like, for successive generations.

Quinn Bolton
Semiconductor Analyst, Needham

Great. Wanted to move to the Taurus AEC product line. In the second half of 2025, it looks like Taurus was one of your faster-growing products. Maybe just talk about what drove that strength. Was it mostly 400-gig AECs at your lead customer? And what are your thoughts on the transition to 800-gig AECs more broadly across the industry?

Jitendra Mohan
Co-founder and CEO, Astera Labs

Yeah. So the second half growth, and you're correct that Taurus was the fastest-growing product line in Q4, was driven by one customer, our lead customer, across both AI and general-purpose compute applications. And it was driven by two factors: the overall increase, the rising tide, where they're deploying more XPUs and therefore need more cables, as well as share gains that we had in that application. So of course, good news. That will continue to benefit us for the rest of 2026. What is also more exciting in 2026 is the industry broadly transitioning from 400-gig, where there is limited opportunity for AECs, to 800-gig, where the opportunity becomes a lot wider. So we are engaged with many customers, sampling and so on, on deploying 800-gig AECs.

This will be a new layer of growth that we'll have for 2026 with 800-gig AECs.

Quinn Bolton
Semiconductor Analyst, Needham

And are you seeing those? When do you think that starts? Is that earlier in the year? Is that later in the year in terms of 800-gig Taurus?

Jitendra Mohan
Co-founder and CEO, Astera Labs

For us? Yeah. So for us, the first half will be spent in qualification, and the volume ramps will come in the second half.

Quinn Bolton
Semiconductor Analyst, Needham

Great. And then maybe just in case some in the room aren't familiar with your go-to-market strategy versus that of Credo. You make the Smart Cable Module. You provide firmware. You don't build the full cable. Credo does vertically integrate and supplies the entire cable. What are the pros and cons of the two approaches?

Jitendra Mohan
Co-founder and CEO, Astera Labs

Yeah. So first of all, both Credo and we are going after the same problem statement; we are just addressing it in different ways. The Credo approach is that they provide the whole cable, and there are some benefits to that. They can usually move faster because they only need to work with one cable vendor, and they control the whole chain. The advantage of our approach is that we provide the supply chain diversity that our customers require. So when you want to go from 10,000 cables, maybe 100,000 cables, to millions of cables, which is really the demand from some of these larger hyperscalers, you want to have a diversified supply chain. And this is where our solution shines, because we build the Smart Cable Module, which sits on either end of the cable, for those of you not familiar.

But this is built to the exact specification of the hyperscaler. They know all of the components that go in there, including the DSP that comes from Astera, but also all of the power components, the EEPROMs, and so on. And more importantly, all of the security, the firmware upgrades, all of those capabilities are controlled by the hyperscaler. So now the hyperscaler does the matchmaking and says, "Okay, I have qualified this Smart Cable Module from Astera, cable vendor one, cable vendor two. Please build this cable for my application number one." And they might go to cable vendor number three and cable vendor number four for another application. So they get to have full control of their supply chain. There is no margin stacking, and they don't need to requalify the cable for every application. So for the hyperscaler, there is a huge advantage.

As Astera, we service hyperscalers. That's why we evolved with this model. We feel very comfortable and confident that over time, this will be successful.

Quinn Bolton
Semiconductor Analyst, Needham

Excellent. Just want to touch quickly on CXL and sort of the memory bandwidth bottlenecks. Can you give us an update on how you see the CXL market developing? Do you still expect to ramp this application for memory expansion in 2026?

Jitendra Mohan
Co-founder and CEO, Astera Labs

Yeah. So CXL, among all of our products, has been much slower to ramp than anything else, actually. So I'll have to admit that. However, I think that 2026 is the year that we see CXL getting deployed. Towards the end of the year, Microsoft published a blog about deploying SAP HANA databases with CXL memory. We had a press release as well with the Microsoft quote. So I think this is one example where CXL really shines in general-purpose compute to enable in-memory databases or large memory applications to run memory that is not bound to the CPU. So as we have said consistently, this is where the first deployment of CXL will happen. Back half of this year, we should start to see this contribute meaningfully in terms of revenues for this product family.

What I'm also excited about is some of the recent explorations that other companies are doing around using CXL in AI applications, because memory is also becoming a bottleneck, especially when it comes to inference and the long contexts that GPUs need to store. HBM is a very expensive memory to store those contexts in, so some of our customers are exploring using CXL-based memory for KV cache applications. Again, we're not counting any revenues from that yet, but purely from a technology standpoint, it is starting to be explored, and who knows, maybe in 2027 that becomes a contributor to revenue as well.

Quinn Bolton
Semiconductor Analyst, Needham

Excellent. Just last question for me, just sort of thoughts on capital allocation and future M&A. You did the Xscale Photonics deal. Are you looking at additional tuck-ins where you might be able to accelerate your roadmap?

Nick Aberle
VP, Treasurer, and Head of Investor Relations, Astera Labs

Yeah. I mean, I think based upon everything that Jitendra has just said, there's a tremendous amount of opportunity in the market. And this is being driven by secular trends around just higher XPU volumes, demand for AI, but also increasing complexity and this widening problem statement around connectivity and the need to drive innovation on the connectivity side. So we're seeing evidence of that through our customers and the pull from our customers to drive next-generation platforms, next-generation products. And given that opportunity, we are going to be very aggressive in the market in terms of building our team organically, but then also being very thoughtful and selective around bolting on additional teams, whether it's for IP purposes or just for kind of raw horsepower to start to service some of these opportunities.

So I would say from our perspective, strategically, we'll be looking to drive organic investment as well as M&A, mostly, I would say, from a bolt-on perspective, acqui-hires to bolster the team and really put us in a good position to capture as much of this TAM as we can over the next several years.

Quinn Bolton
Semiconductor Analyst, Needham

Excellent. We've got maybe a minute or two for questions from the audience if anyone has a question. All right. Well, we'll wrap here. Jitendra, Nick, thank you very much for joining us at the Needham Conference. Really appreciate it.

Jitendra Mohan
Co-founder and CEO, Astera Labs

Thank you very much. Thank you, everyone.

Quinn Bolton
Semiconductor Analyst, Needham

Thanks.
