Astera Labs, Inc. (ALAB)

27th Annual Needham Growth Conference

Jan 15, 2025

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

Okay. We'll go ahead and get started. Welcome, everybody. My name is Quinn Bolton. I'm the semiconductor analyst for Needham & Company. Thank you for joining us here on the second day of Needham's 27th Annual Growth Conference. It's my pleasure to host this fireside chat with Astera Labs. The company was founded in 2017 and is a global leader in purposeful connectivity solutions and software for AI and cloud infrastructure.

Astera went public last March and has seen tremendous growth in both its operating results and stock price since the IPO. Joining me from the company are Jitendra Mohan, CEO and co-founder, and Nick Aberle, VP of Treasury and Investor Relations; Leslie Green, Head of Investor Relations, is in the audience. So, Nick, Jitendra, thank you for joining us. Maybe as a starter for folks who may not be as familiar with the company, can you give us a quick overview and walk us through the company's four product lines? And as you do, can you highlight for us the core competencies that link these products together?

Jitendra Mohan
Co-Founder and CEO, Astera Labs

Yeah, happy to. So, first of all, welcome, everybody. There are some familiar faces, so you'll have to bear with the storytelling for a few minutes here. And for those of you who are new and not as familiar with Astera Labs, let me give you some context. We started the company in 2017 with a singular focus on connecting thousands of GPUs together. We believed that AI workloads would require a new architecture where many GPUs connect, and that this workload would only be possible to run in the cloud. So, as Quinn mentioned, we founded the company to deliver AI infrastructure to run in the cloud, and that's what we have done pretty singularly over the last five or six years.

Now, what we did not imagine is that these models would grow into the trillions of parameters, and that the thousands of GPUs would become hundreds of thousands of GPUs. Because of that, connectivity has become a huge bottleneck, whether it's a data bottleneck, a network bottleneck, or a memory bottleneck, and we have been solving these problems to make sure that the utilization of these AI clusters is as high as possible.

Today, even a best-run AI cluster runs at around 50% efficiency. So you give billions of dollars to the GPU providers, or build your own ASICs, and you are only utilizing 50% of that investment. The way we address this problem and bring up the utilization is by attacking these data bottlenecks, networking bottlenecks, and memory bottlenecks, which gets us into the product lines that we have. We started with our Aries product family, which is PCI Express retimers that address the data bottleneck. These devices are in full production, and we are often best known for our retimers.

We have a majority market share and have shipped millions and millions of these devices. Following our Aries retimers, we came out with our Taurus Smart Cable Modules. These are Ethernet retimers that eventually get assembled into active electrical cables. They started ramping in Q3 and Q4, and we expect that ramp to continue through the rest of the year. Then we came out with our Leo family of CXL memory controllers, which address the memory bottleneck by delivering more memory capacity and more memory bandwidth to CPUs and GPUs, and we should start to see them contribute meaningfully to our revenue this year.

Last but not least, last October we introduced our Scorpio family of products, which, like our other families, are purpose-built for AI applications. Scorpio comes in two series. One is the P Series, which is used for scale-out applications, and I'm sure during the rest of the talk we'll get into what that is. The other is the Scorpio X Series, which is responsible for scale-up connectivity, and that is a great opportunity for us.

To your point, Quinn, on how are we different from others, there are many ways we are different. There are speeds and feeds and power and latency and so on. I would say the one thing to point out is just the deep relationship that we have managed to establish with our customers over the last five or six years. Of course, it probably started out with them not really believing what we told them. And then what we told them came true, and we did not disappoint.

And so, over the years, it has become so close that we get involved in the design decisions of their future platforms as they are developing their products. Oftentimes, we tape out our chips at the same time as our customers are taping out their SoCs. And so, that gives us just an incredible amount of visibility into where the customers are going, what type of products we should build, what features to add, what features not to add, what timelines to target, and so on. And I think that's a distinct advantage that we have over our competition outside of software and chip architecture and things like that.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

Yeah, that's where I was going to go next. One of the other differentiating factors for Astera is your architecture. You've used more of a software-defined approach, where you have a number of microcontrollers and sensors embedded in your designs. Talk about that design approach, because many of your connectivity competitors are more hardwired, DSP-based designs.

Jitendra Mohan
Co-Founder and CEO, Astera Labs

Yeah, so I've been doing semiconductor design myself for like the last 30 years. And traditionally, the way these chips were designed was to figure out what the problem is and put a solution in hardware, state machines and dedicated hardware and so on. And that was good enough. Those chips actually worked pretty well, solved the problem very efficiently, and life was okay. But that was also the era where these solutions lived for many years.

You developed a chip and it continued to work in that protocol for that application for several years. That is absolutely no longer true in this kind of AI world where the design cycles are now shrinking to one-year cadence. And so, those traditional architectures are simply not able to keep up with sort of the evolving use cases, applications that our customers want.

And so, we were fortunate enough to realize this and build our chips from the ground up with a software-first approach. We do whatever we possibly can in software, and when software is not fast enough, we use hardware to supplement it. What that does is result in a chip that is extremely flexible, one that we can customize for different applications and for different hyperscalers.

We can tune the chip to deliver optimizations that our customers want. And on top of that, when you start to put hundreds of thousands of these chips into your AI infrastructure, the diagnostics and telemetry capabilities become really, really important. And so, we added those on top of the performance-determining parts of the chip.

So, we have very performant chips that are very flexible and give you an incredible amount of diagnostics that lets you manage your infrastructure. Over time, we have really become the eyes and ears of our customers' connectivity infrastructure. The way we make all of this available to our customers is through software we call Cosmos. There are parts of Cosmos that run on the chip and are responsible for the optimizations, customizations, and diagnostics features that I talked about.

And there are other parts of Cosmos that run in our customers' cloud stack. So, when they are managing their infrastructure, they pull all of this information out of our chips and then act on it in an intelligent manner. And this provides us with a lot of stickiness; it's just not possible anymore to rip and replace.

If we take our chip and we white-label it and give it to somebody else, they still will not be able to displace us because of this strong software component that we have. And this Cosmos software is applicable to all of our products. It supports all the products. It's very easy for anybody to upgrade from one generation to the next. And it's much easier for us, I would say, not very easy, but much easier to introduce a new product under the same Cosmos umbrella.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

Perfect. I wanted to sort of now move into some of the product lines, maybe starting with the Aries and the PCIe retimers. The business really sort of took off about six quarters ago as you launched, or as NVIDIA launched its Hopper platform with pretty significant Aries retimer content on it. As we look forward now to the launch of Blackwell, talk about sort of the puts and takes for Aries content on the new Blackwell-based systems.

Jitendra Mohan
Co-Founder and CEO, Astera Labs

Yeah. So, this launch of Blackwell did create a lot of confusion in the investment community. We have been aware of what Blackwell means since two years ago, because that's when we started working with the GPU provider, as well as the eventual hyperscaler customer, on what it would take to deploy Blackwell at scale.

But to be crystal clear, if we look at the Blackwell platform as a whole, our content, Astera's content, goes up relative to the Hopper generation. However, it goes up because of the additional Scorpio content, which is a PCIe switch fabric, not because of Aries. In fact, Aries content goes down on the whole with the Blackwell platform. Within that, if you look at each individual form factor, whether it's an HGX board or an MGX or what have you, the content differs.

Sometimes it's the same as Hopper. Sometimes it's less than Hopper. But in the cases where it's much more, and those are the cases where a hyperscaler is customizing the Blackwell platform for their own infrastructure needs, we have an amazing amount of content that overshadows any loss in retimer content. And I'll make one other comment: if you zoom out from that and look at retimers as a whole across GPUs and ASICs, then the retimer content goes up as well.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

Obviously, the hyperscalers with their custom-designed ASICs are a growing part of the business. Late last year, AWS hosted its re:Invent show, and they demonstrated some of their new servers. The Trainium 2 server has a pretty high attach rate for PCIe retimers, and the Trainium 2 Ultra has both your retimers as well as the PCIe AECs. Can you talk about your content on some of those platforms that have been shown in public? And specifically, I think there was a GB200-based platform that may have had Scorpio content, so talk about your content there and on some of the Amazon platforms.

Jitendra Mohan
Co-Founder and CEO, Astera Labs

So, clearly, we'll be very restricted in talking about the platforms themselves, let alone our content in them. But I think we can definitely talk about the application. And there are certain things that are easy to see. For instance, you can see our PCIe AECs, which are based on Aries retimers. The important part about this launch that Amazon had, and this architecture, is that we are now able to play in the scale-up network. Going back to the Blackwell family, NVLink is the scale-up fabric for Blackwell, so we don't get to play in that; we only get to play in scale-out. But with these other architectures, where people are using PCI Express-like fabrics for scale-up, we have a huge opportunity. You mentioned the PCIe AECs.

That's something that has been ramping now for a quarter and a half, and we continue to see strong trends, because scale-up requires a large number of interconnects. You're connecting a lot of GPUs with all-to-all connectivity, and that just drives a lot of volume. There is also a lot of application for the Scorpio X family in these architectures, again because you're trying to connect these GPUs together, and each GPU must talk to every other GPU. And the last part was the Grace Blackwell. I was actually at re:Invent; I gave a talk there. They had the Grace Blackwell-based server on display, which is actually unique for AWS; typically, they don't display this. And it was a thing of beauty, honestly.

It had two Grace Blackwell boards coming from NVIDIA, with basically two Grace CPUs, four Blackwell GPUs, and nine Nitro NICs. That's all you could see on the board, so that's all I can officially talk about. But that gives you the idea of the application, where you are trying to customize and deliver the power of an NVIDIA GPU, or an NVIDIA Grace Blackwell combination, in a form factor that is suitable for your infrastructure.

Now, in order to connect eight or nine NICs to the four Blackwells, you need a solution like a Scorpio switch for scale-out connectivity, where you can take a GPU on one port and NICs, storage, and other things on the other ports. So, both were very impressive demonstrations from AWS, and very good for us.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

Perfect. Competition in PCIe Gen 6 devices is heating up. You've had Marvell, Broadcom, and Credo all announce devices. Marvell and Credo, I think, are talking about better reach of their systems, and Marvell is highlighting perhaps lower power. But how do you expect to stay ahead, or what do you do to stay ahead, in the PCIe retimer market?

Jitendra Mohan
Co-Founder and CEO, Astera Labs

Yeah, I think a lot of people are talking about these different power specs and latency specs. We are talking about design wins. I think that really is the difference. We introduced our product and showed it working at GTC last year. Some of these announcements came around then; I think Broadcom announced at the same time, and some of them have followed. But at least to the extent that we see, we don't see anybody else with a viable Gen 6 product yet. It'll change, I'm sure it'll change; they will get it all working and so on. Now, we were able to deliver our Gen 6 product first because we are building on the legacy of Gen 5.

We learned so much over the last three or four years of shipping Gen 5 about what it takes, beyond what's written in the standard, to make these products work. And all of that knowledge goes from Gen 5 into Gen 6 through our Cosmos platform, so we are able to ramp that much faster. The other important thing to realize is that this is the time when our customers are ramping their Gen 6-based solutions. We've heard about the Grace Blackwell ramps and so on.

And if you're ready with your solution now, great, as we are. But if you're not ready, then I'm afraid the train is leaving the station. So, maybe we were a little bit more cautious back in March when we introduced this; there is a lot more confidence now that we will be able to keep our position in the Gen 6 generation, just like we had in Gen 5.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

Perfect. Let's turn to the Taurus AEC product line. You talked about the ramp of the 400-gig products in the second half of 2024. How do you see that business in 2025? When do you see broader adoption and the move to 800-gig on the AEC front?

Jitendra Mohan
Co-Founder and CEO, Astera Labs

Yeah, actually, Taurus doesn't get talked about often enough, but it's very exciting for us because it's going to be the second product line that ramps in volume. We saw that ramp happen already in Q3 of last year, and then a full quarter in Q4. We'll announce the results during our earnings call, but so far it is living up to our expectations.

And there is also a lot of headroom to grow at 400-gig. The thing to realize, though, is that 400-gig is still a niche application, so not everybody is going to adopt active electrical cable solutions at 400-gig. Our opportunity at 400-gig is differentiated as well as broad within that one account. We have multiple applications, both AI and general-purpose compute.

We have multiple form factors, X cables, Y cables, and straight cables, and we are enabling multiple cable vendors at 400-gig. So there is some diversity within the main customer, but nonetheless it's still fairly concentrated. Now, as we go to 800-gig, that will change. We are seeing a little bit of the same fragmentation now happening at 800-gig as well.

Whereas previously we figured everybody would use active electrical solutions for 800-gig, what we are finding is that some people are able to re-architect their systems so that passive cable is good enough, and if passive cable is good enough, nobody will use active. We are also finding people who are not able to contain the connectivity to one rack; they need to go to multiple racks, in which case optical becomes the solution. So all of these will coexist. In general, the 800-gig customer base will be broader than 400-gig, that is true, and we'll see volume ramps pushing into 2026.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

It looks today like you've got AECs certainly in the Nitro, sort of the first stage of scale-out networks. I think Amazon, and perhaps Google, may also be starting to use AECs in their scale-up networks. Can you talk about the use cases? As you look forward, do you see it in both scale-out and scale-up? Does it tend to lean one direction or the other?

Jitendra Mohan
Co-Founder and CEO, Astera Labs

It really depends upon the hyperscaler and their choice for scale-up. First, if you look at scale-out, that is very consistent across all opportunities: scale-out is typically Ethernet. You'll have a PCIe-based NIC, and you may or may not have a retimer connecting the NIC to the GPU or the CPU. But when you leave the NIC, you have an Ethernet cable that goes to the top-of-rack switch. So, that application always exists across the board.

Scale-up depends upon the hyperscaler. So, where the hyperscaler is using a protocol that's similar to either Ethernet, such as, as you mentioned, Google, or PCI Express, as some of the hyperscalers do, we absolutely have a play. And we are seeing the benefits of that in both our product families, both the Aries SCM as well as the Taurus SCM.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

Perfect. Maybe we move next to Scorpio fabric switches and UALink. Can you give us the background on how the company was able to replace Broadcom on the board of the UALink consortium, and what advantages does being on the board of UALink bring to the company?

Jitendra Mohan
Co-Founder and CEO, Astera Labs

Absolutely. First of all, I can't tell you why Broadcom left the consortium, but we can certainly talk about why we got added; the net result was a replacement. So, let's step back a little bit and talk about what UALink is and what it means. NVIDIA has NVLink, and they have been very successful using NVLink and building larger and larger clusters with it.

Other people are trying to do it their own way today, whether it's Google with ICI or Amazon with their Neuron fabric. Some people are using straight-up Ethernet. So there is a lot of diversity, and there are pros and cons to each approach. What UALink tries to do is consolidate this all at the industry-wide level, where you want to deliver the simplicity of PCI Express.

PCI Express is a very simple protocol that has been used for a long time; it's very easy to have one GPU talk to another GPU. So UALink starts with the PCI Express protocol and simplifies it even more, but then takes the line rate, the SerDes or the PHY, from Ethernet and marries it up with the simplified UALink protocol. The two together form UALink.

It could be the industry's answer, if you will, to NVLink. There's a lot of promise on paper. We are one of the promoter members, and part of the reason I believe we were invited to join is because we are already playing in scale-up networks through our Scorpio X family of switches, as well as the Aries SCMs.

And this is also the benefit that we both get from and provide to the consortium: delivering an overall connectivity fabric that includes switches, and potentially retimers over copper or other media over time. So, it's a very exciting development. I would also caution that it's relatively new, so we should wait and see which of the hyperscalers start to adopt UALink, and then the ecosystem will flourish around that.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

As I look at scale-up networks to date, they've all been copper-based, usually limited to just one or two racks. As you look to the future, I think the intent is to increase the size of those scale-up networks to get larger virtual GPUs or XPUs. What's your sort of vision on those scale-up fabrics? Do they remain copper-based? Do they start to introduce optical solutions? And I know you haven't announced optical products yet, but to the extent you see optics in these scale-up fabrics, could that be a future opportunity that the company would look to address?

Jitendra Mohan
Co-Founder and CEO, Astera Labs

So, the short answer is yes. But again, let's look at what the lay of the land is today. Most of the customers that we talk to are still limiting their scale-up clusters to one or two racks, and there are reasons to do that that go beyond copper versus optics. So, that's what we see today, and we are laser-focused on enabling this connectivity through one or two racks. Some of them go a little bit further than that, still with copper.

And the reason we want to do it is not just because we are good at copper; it's because that's what the customers want. Customers like copper because it is more reliable, lower power, and less expensive. So, my belief is that if somebody can run their scale-up network over copper, they absolutely will.

But as you correctly pointed out, as the sizes of these clusters go up and data rates potentially go beyond 200-gig (and, by the way, only NVIDIA is in that camp right now; everybody else is just trying to get to 200 first), at some point optical will become an important technology. And we are working very closely with our customers to understand what that time frame is and what solutions they need from us.

And that gives us time to build those solutions. So, optical is definitely a great area of exploration. We are watching it very carefully, and at the right time we will introduce solutions that make sense. You have to understand that Marvell and Broadcom have a strong position in the optical space, and they are great companies, so there is no point in trying to be a third player and take share from the established incumbents. We need to figure out where the unique opportunity is, and I do believe it comes with scale-up networks and people adopting new protocols like UALink.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

For UALink or just the fabric switch opportunity, can you give us a sense? What is that ultimate TAM or what's your kind of content opportunity as you look to build up these larger scale-up networks?

Nick Aberle
VP of Treasury and Investor Relations, Astera Labs

Yeah, so on Scorpio overall, we've outlined a $5 billion TAM in total, of which about half is for scale-out and about half is for scale-up. The scale-out piece is a real market today; Broadcom is shipping roughly $1 billion into that space today. But we do believe it continues to grow as accelerator growth happens going forward. It's really the scale-up piece that gets really exciting.

That's effectively a zero market today that's poised to grow very nicely. There will be opportunities across a multitude of different protocols, whether it be PCI Express, Ethernet, or UALink, as Jitendra outlined. So, we are very steadfast right now in engaging with all those different parts of the market, engaging with the customers, and building that up. All these accelerators going forward are going to start to support scale-up in a big way, and we are hoping to enable that with various forms of Scorpio.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

Perfect. If you look at scale-up networks, you've got NVIDIA with a switch-based architecture, or fabric, today. I think Google and Amazon use more of a torus, or point-to-point nearest-neighbor, architecture. So, on the one hand, switch-based gives you a Scorpio X opportunity, and the torus probably gives you lots of links and AECs from one GPU or accelerator to the next. Are you guys agnostic to which way the market goes, whether it's torus or switch-based? Because you seem to have content on both sides.

Jitendra Mohan
Co-Founder and CEO, Astera Labs

Yeah, that's a very good point. And in general, that's our story. We are enabling the AI infrastructure in the cloud. Sometimes these opportunities are based on pure-play connectivity products like the Aries and Taurus retimers. Other times, it's switching products like Scorpio. And we will have Leo show up there as well. So, to the first order, yes, we are agnostic.

The way we operate is we try to understand our customers' systems as thoroughly as possible and make sure that we have a trusted relationship with them. So, they tell us what products to build. And we will continue to operate in that way. If they ask us to build a different type of products that enables them to deliver better results to their customer, we will do that. If it's optics, we'll absolutely explore that.

To the extent that these solutions are switched, we do get the Scorpio X Series opportunity, and we'll continue to add to that roadmap. That's a huge opportunity just because of the ASP and how critical it is to the system design. I would say that's maybe one difference between Scorpio and the retimers, Aries and Taurus: with Scorpio, the engagement starts much earlier because it's so critical to the overall architecture of the system. In the case of Scorpio specifically, we engage with our hyperscaler customers two years in advance, whereas for a retimer-class device, it might be closer to when they have figured out what the architecture is and the level of connectivity that they want.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

I've asked a lot about the Scorpio X family. Maybe just one on Scorpio P. As you look forward on the head node opportunity, talk about your entrance in that market versus the incumbent Broadcom, how you've designed your solution, where you've optimized it for AI, because you took a slightly different approach than what's on the market today.

Jitendra Mohan
Co-Founder and CEO, Astera Labs

Yeah, absolutely. It's a very good observation. As Nick pointed out earlier, if you look at the TAM today for Scorpio-type switches, it's about $1 billion. The majority of that is going into scale-out, and the majority of that is going to Broadcom. So, they are indeed the incumbent. And the way they got to this position is that they had these Gen 5 switches for other general-purpose compute applications, for storage, and so on.

And they are good switches. So, when these head-node connectivity applications came up, we provided our retimers and Broadcom provided their switches, and that's how these Gen 5 systems were built. When we started thinking about doing the Scorpio product line and getting requests from our customers, we had to do a couple of things differently in order to compete effectively against Broadcom.

One is to build products that are going to be better than Broadcom's. So, this is where your comment about building Scorpio for AI is very relevant. We didn't try to be everything to everybody. It's a new application, and we said we're going to focus our switch on AI. So, it's optimized for the AI workload and delivers the performance required for AI.

And it delivers the diagnostics capability that our customers have grown used to with our Cosmos software and so on. So, that was first. The second thing was we had to deliver it at the same time frame that our customers needed. And that is extremely important because if you can't meet the time frame of your customers, they will find some other solution.

And so, we were fortunate enough, with good guidance from our customer, to announce Scorpio in October of last year with working samples. Now, we have already shipped pre-production units, and they are going through qualification. So, we feel very good about the timing of the Scorpio platform.

And last but not least, we need to keep our customers happy. That's really just in the DNA of the company. We don't play strong-arm tactics with our customers; they like working with us, we like working with them, and we will really work to keep that relationship going. As a matter of fact, there is a lot of interest from our customers in enabling a second vendor in the PCIe switching space, so we benefit from those tailwinds as well.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

Perfect. I wanted to switch now to the CXL opportunity. Can you give us your latest thoughts on CXL? And are we still on track to see the CXL memory expansion application ramp this year?

Jitendra Mohan
Co-Founder and CEO, Astera Labs

Yep, our best view so far is that CXL will ramp this year. There are a couple of reasons why CXL has been delayed; we've been talking about it for the last couple of years. But we think this is the year it happens, for two reasons. One is CPU availability. Sapphire Rapids, which was supposed to support CXL, did not, and Genoa supported it just in a limited way.

Emerald Rapids was the first CPU that officially supported CXL expansion, but it became sort of a half-generation CPU, so it did not really get that much adoption in the cloud. Now, with Granite Rapids from Intel, Turin from AMD, and the equivalent ARM processors all coming out, at least we have cleared that hurdle.

And the second thing is with these CPUs coming on board, the hyperscalers are now replacing their old servers with new servers. So, some spend is going from AI systems back into general-purpose compute. So, because of those two reasons, we think this is the year for first deployments of Leo for CXL memory expansion. In fact, we are working very closely with the lead customer on rack scale qualification in the data centers. So, we feel good about that opportunity for 2025.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

Perfect. If I step back, your solutions address bottlenecks in AI infrastructure. One of the bigger bottlenecks today, memory bandwidth, seems like it's going to become an even bigger challenge, especially with inference-time scaling, or test-time scaling. So, are there ways, perhaps in the future, you'll be able to address that, not necessarily through CXL, but through other technologies? Here, I'm thinking about the move to HBM4 and HBM4E, where you can customize the base die to get better memory bandwidth. Are there things you can do in the memory subsystem beyond CXL to improve memory bandwidth in these AI systems?

Jitendra Mohan
Co-Founder and CEO, Astera Labs

First of all, I'll say we will not talk about unreleased products. But at the same time, please keep the suggestions coming. We got 12 constellations to fill. However, for CXL, I think the first applications are going to be for memory expansion. We have already demonstrated the benefits of using CXL for in-memory databases. That's a strong driver. You can very, very easily see the return on investing in CXL-based memory.

People are building these large systems that can take 100 DIMMs per system and deliver terabytes of memory, where previously that was just not possible, or was very expensive because you had to use a new CPU. So, I think that's one use case. We have also demonstrated CXL showing great results for high-performance compute applications, again in general-purpose compute. So, that's great. And then, last but not least, there are inference applications, as you talked about.
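To make the capacity argument above concrete, here is a minimal back-of-the-envelope sketch of how CXL-attached DIMMs expand a server's memory footprint. The DIMM size, slot counts, and split between native and CXL-attached DIMMs are illustrative assumptions, not Astera Labs specifications.

```python
GIB_PER_DIMM = 64  # assumed 64 GiB RDIMMs


def total_memory_tib(native_dimms: int, cxl_dimms: int,
                     gib_per_dimm: int = GIB_PER_DIMM) -> float:
    """Total system memory in TiB from native plus CXL-attached DIMMs."""
    return (native_dimms + cxl_dimms) * gib_per_dimm / 1024


# A hypothetical two-socket server with 24 native DIMM slots,
# versus the same server reaching ~100 DIMMs via CXL expansion.
baseline = total_memory_tib(native_dimms=24, cxl_dimms=0)   # 1.5 TiB
expanded = total_memory_tib(native_dimms=24, cxl_dimms=76)  # 6.25 TiB
print(baseline, expanded)
```

The point of the arithmetic is simply that CXL lets capacity scale past the CPU's native DIMM slots, which is what makes in-memory databases the natural first workload.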

We have, again, demonstrated much better inference, more queries, et cetera, using CXL. But somebody has to write the software to enable that, and that is really the long pole in using CXL for AI inference applications. I think over time it will happen. There are a couple of POCs going on right now, but it's too early to celebrate. The first applications will be memory expansion for databases and high-performance compute, and then AI will follow.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

Perfect. One question that has come up a lot over the last couple of weeks is the thought of using CPO in scale-up fabrics. I know you're not in the CPO space today, but you certainly have relationships with all of the leading XPU vendors and hyperscalers. And so, my guess is you have a view on CPO, and I just wondered if you could share that with the group.

Jitendra Mohan
Co-Founder and CEO, Astera Labs

Yeah, absolutely. It's a phenomenal piece of technology, and it has its pros and cons. If I look at what the hyperscaler customers have done traditionally, and I strongly believe will continue to do, it is to first try to use passive copper. If they can use passive copper, they will. If they cannot, they will try to augment it with retimers and go to active electrical cables or chip-down retimers, but stay with copper.

When that becomes impossible, like you suggested, as rack scale increases, they will go to optics, but that is typically pluggable optics. The reason for using pluggable optics is that optics tend to fail. These are esoteric components built on non-standard CMOS processes, and the failure rate is high. When a pluggable fails, you simply remove it and put a new one in. When pluggables cannot deliver enough bandwidth, then people will start looking at CPO.
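The decision ladder described here, passive copper first, then retimed copper, then pluggable optics, and only then CPO, can be sketched as a simple selection function. The reach and bandwidth thresholds below are hypothetical placeholders for illustration, not real link-budget numbers from Astera Labs or any standard.

```python
def pick_interconnect(reach_m: float, switch_bandwidth_tbps: float) -> str:
    """Illustrative version of the hyperscaler interconnect preference order."""
    if reach_m <= 2:                   # short in-rack runs: cheapest, most reliable
        return "passive copper"
    if reach_m <= 7:                   # retimers extend copper's usable reach
        return "active electrical cable (retimed copper)"
    if switch_bandwidth_tbps < 50:     # field-replaceable optics when copper runs out
        return "pluggable optics"
    return "co-packaged optics (CPO)"  # only at the highest switch densities


print(pick_interconnect(1.5, 0.8))     # passive copper
print(pick_interconnect(5.0, 0.8))     # active electrical cable (retimed copper)
print(pick_interconnect(20.0, 10.0))   # pluggable optics
print(pick_interconnect(20.0, 100.0))  # co-packaged optics (CPO)
```

The key design point in the ladder is serviceability: each step down trades reliability and ease of replacement for reach or bandwidth, which is why CPO sits last.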

If you simplify all of that and ask where CPO will occur first, I believe CPO will show up first in switches, which have the highest density of interconnect and the highest speeds. For example, a 100-terabit switch, or even the 50-terabit switch that Broadcom demonstrated last year at the Hot Chips conference. That's where it will start to show up first. That's not an area we play in today. I would say this is not a this-year or next-year problem for us. At the same time, we are watching it very carefully.

If you look at the DNA of the company, it is about adding capabilities. We started as a retimer company, if you will. Then we did the Leo CXL memory controller. A lot of people told us we wouldn't be able to do it, and now we are in a leadership position. And last year, we introduced Scorpio. This is all based on conversations that we have with our customers. And we are having similar conversations with our customers now.

And to the extent we need to enable the optical interconnect, whether through a pluggable form factor or a different form factor, we will do that. But I will say CPO is not a cure-all. You will not start to see suddenly everybody shift to CPO. If you look at it from an industry standpoint, NVIDIA is probably the most ahead of anybody.

They are already deploying 200 gig, and they're doing it over copper. The rest of the industry has to catch up to 200 first, which I believe they will do with a combination of copper and pluggable optics. Then, when we go to 400 gig, some of the higher-end systems, switches, and such will likely go to CPO first.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

Perfect. My last question is just sort of thinking about capital allocation. You've got a lot of cash on the balance sheet. What are your primary uses of cash? You've grown pretty darn quickly organically. Would you look to supplement that growth through tuck-in acquisitions? Are you focused mostly on just growing the business organically? Any thoughts of buybacks or dividend return to shareholders?

Nick Aberle
VP of Treasury and Investor Relations, Astera Labs

Yeah, I mean, it's a really exciting time for the company, as Jitendra outlined. I mean, there's a tremendous amount of opportunity. We've earned the seat at the table to talk about next generation and then generations after that of different architectures and how we can contribute to solving all these bandwidth challenges. And in the meantime, speeds are increasing, complexity is increasing.

So, there's a tremendous amount of work that needs to be done. I feel like the DNA of this company is to do these things ourselves and to knock things down in an organic fashion. With that being said, our ambitions are to grow and to build the team and to bring in the talent and the IP necessary to be able to fulfill those obligations.

So, we'll very opportunistically and strategically think about adding teams, small pieces of technology in order to supplement or accelerate our efforts. At this point, I don't see anything big on the horizon. Never say never. But I think we have a good roadmap. We're bringing in folks and really focusing on building the team organically. And we'll bolt on things as needed.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

Perfect. We've got a couple of minutes left if there are any questions from the audience. Anyone?

My question may be a little bit biased. Being a former system architect, I always hated retimer stacks. So, in order to simplify my life as a system architect, I think there's a huge opportunity in having retimer cables to simplify the system architecture. How do you see the opportunity for retimer cables and electrical cables in general? And what are the optics of the opportunity there in the long term?

Jitendra Mohan
Co-Founder and CEO, Astera Labs

Yeah, that's squarely right. I fully agree with you. Like I said earlier, if somebody can do a link without using retimers, that's absolutely what they will do, and we can't fight that. But as data rates go up and you get more connectors and so on, you end up requiring retimers. And we believe a good place to put them is in active cables.

So, that's where our Taurus Smart Cable Modules, as well as our Aries Smart Cable Modules, play. We build these modules and sell them to the cable vendors, and they build active electrical cables. These are now getting deployed at scale, starting in Q3 with a full quarter in Q4, and they will continue to ramp through the rest of 2025. So, it's a big opportunity for us. We are very excited about that.

Do you think it gets even bigger than what most people are forecasting? I think Marvell said it was a billion-dollar TAM, but clearly I think that's a low estimate. I want to get your sense of how big the TAM is.

Nick Aberle
VP of Treasury and Investor Relations, Astera Labs

Yeah, it's very sizable. And we're attacking it from a couple of different avenues. We're attacking it from the PCI Express angle, with our AECs and modules for that part, and also from the Ethernet side with Taurus. Both of them are substantial market opportunities, and they will both grow over time. I think the scale-up piece of this is very interesting. It is very early in its evolution, where we're just starting to see that application roll out en masse across PCI Express, leveraged by a small set of customers.

But that could expand and ultimately evolve into something like UALink, which would be leveraged across an even greater portion of the ecosystem. And then on the Ethernet side, speeds will continue to increase, and that will continue to put pressure on pushing signals across distances. So, we believe that's going to be another big driving factor there. But yeah, we see avenues of growth on both sides of the house.

Jitendra Mohan
Co-Founder and CEO, Astera Labs

So, just to put this in context, these are healthy TAMs. And we'll continue to focus on them. But if you compare it to the TAM that's enabled by, let's say, a CXL memory expansion solution or Scorpio with PCI Express or PCI Express-like switching, they are far greater. So, we're certainly exploring those. And our presence in there is very small. So, the growth rate for these TAMs is going to be very, very nice for us.

Quinn Bolton
Senior Equity Research Analyst and Managing Director, Needham & Company

I have time for one more question, if there's one from the audience. OK, we'll wrap it up there.
