Astera Labs, Inc. (ALAB)

Barclays 22nd Annual Global Technology Conference 2024

Dec 11, 2024

Thomas O'Malley
Analyst, Barclays

All right, welcome back. I'm Tom O'Malley, semi and semi-cap equipment analyst here at Barclays. We're at the Global Technology Conference. I'm pleased to have Astera Labs here, Jitendra Mohan and Mike Tate. Thank you for joining us. Really appreciate it.

Jitendra Mohan
CEO, Astera Labs

Absolutely.

Thomas O'Malley
Analyst, Barclays

So why don't we start from a 30,000-foot view here, because it's been a crazy ride since the IPO? You launched talking about a PCIe retimer portfolio, and you had three big product portfolios. Why don't we walk through each one of those, maybe the Aries product line a little bit more because it's driving the growth today, and then we can dive into each.

Jitendra Mohan
CEO, Astera Labs

Yep, absolutely. Thank you all for attending. Some of you are familiar faces, so maybe it'll be a little bit of a repeat, but bear with me since you asked me to start from the top. It was a wild ride well before the IPO; it's just gotten a little bit wilder since. So we started back in 2017, and our view was actually very straightforward. We thought that AI models would become so complex that you would have to run these training workloads in the cloud. And we wanted to be the first company to provide the connectivity infrastructure for the cloud, and that's what we've done for the last seven years. What has happened is, in that time, the models just exploded. Now we talk about trillion-parameter models. Back then, you'd never even heard of those.

And of course, they are all running in the cloud, but one of the things that has become extremely important is to maintain or improve the utilization of the cloud infrastructure, because our customers are putting billions of dollars into this infrastructure, so running it at peak capacity is extremely important. And what we have been able to do is solve this problem in many different ways, starting with the Aries product line that you mentioned. What the Aries product line does is allow you to send data at ever-increasing rates between the different components that make up an AI server. The AI server itself has evolved even over the last six months, but in general, think of PCI Express as the nervous system of a server.

If you want to connect storage, NIC cards, GPUs, even FPGAs and others, all of this connectivity happens over PCI Express. And as the GPU compute is getting more and more powerful, connectivity speeds are going up, and so the signals simply don't reach from one point to the other. And that's where customers use a retimer device that, in principle, allows the signal reach to double or, in some cases, even triple. And that's very easy to see. And we were not even the first people to come up with the retimer; there were others before us. The reason we became successful is the unique architecture that we had with our products, which is really rooted in software. So we do all of the protocol processing, as much as possible, in software, and then we expose that software, called Cosmos, to our hyperscaler customers.

We will talk more about that later as well, I'm sure. But because of this, we were able to gain a lot of market share. At Gen 5, we've been in production now for a couple of years, and we have probably 90% plus market share. So that was all very good. Earlier this year, in March, we also introduced our Gen 6 device. So that's a new device. We were again able to introduce it very rapidly to meet the time frame that our customers require and really leverage all of the knowledge and experience that we built with our Gen 5 product to rapidly scale Gen 6, to get it to work, ship it to customers, and get the learnings in the field. At this point, six or so months after our introduction of PCI Express Gen 6 retimers, we feel pretty confident about our position.

Of course, there are a lot of other folks trying to do the same thing, but because of where we stand, I think we are on a good path, barring any catastrophes, to repeat the success that we had in Gen 5. And then the battle will move on to Gen 7 and so on. That was the first product family. Second is Taurus. Taurus is a little bit different. It does for Ethernet what Aries retimers did for PCI Express, except that Taurus is predominantly deployed in an active electrical cable form factor. And we sell a module that includes the retimer, the optical, sorry, the copper DSP that we have, as well as some other components, that different cable vendors can then go and build a whole cable with. And so we play to our strengths. We are good in the retimer DSP technology.

We know the electrical signaling. We don't know about cables that much. We don't know how to build backshells or stock inventory. So we focused on where our strengths are. And that model, certainly in our biased view, is a very good, sustainable model. And we are seeing a good amount of traction with that. There are other competitors, some that build the whole cable, others that provide the chip itself. So we will see how it all pans out. I think it's a competitive market. People will definitely fight for share. But the good news is the overall market is growing in size. So there is definitely room for other folks as well. So that's Taurus. The third one is Leo. Leo is a little bit different. Aries and Taurus address the data and networking bottlenecks. Leo addresses the memory bottleneck.

So what Leo does is allow a system to expand, to add more memory capacity and more memory bandwidth, using a protocol called CXL, which basically runs on top of PCI Express. This product is unique in the capabilities it offers. We've had it in silicon form for about two years now, and we've learned just immensely. And in 2025, we should start to see the first deployments happen now that the appropriate CPUs are available. So a lot of the revenue goodness, at least, from the Leo family is yet to come. And last but not least is the Scorpio family. Of course, we're very excited about Scorpio. Scorpio is our entrance into the PCI Express Gen 6 switch market. We announced it in October. And again, we continue to build on this theme of better utilizing the infrastructure that the hyperscalers are deploying.

The journey for Scorpio actually started maybe two years ago when we were approached by the AI platform providers and the hyperscalers to say, "Look," and this is actually a very important point, "Look, two years out, we plan to deploy our system that will require the AI platform provider to build this chip, for us to build this chip, and you, Astera, we would like you to build this other chip," which eventually became the Scorpio P family. So the Scorpio P family effectively allows an ASIC or a GPU to connect to a scale-out network. It allows the GPU to talk to other networking devices, NICs, SSDs, CPUs, and so on, fairly widely applicable across different GPUs and ASICs.

And then on the other side, we have the Scorpio X family that is equally exciting, maybe even more so, because what the Scorpio X family does is address the scale-up network. And this is a growing TAM, effectively zero today if you leave NVSwitch out of it, and it allows you to connect multiple GPUs together, optimize the performance of these clusters, and customize it for the unique requirements that the hyperscalers have. So we're very excited about all four product families so far.

Thomas O'Malley
Analyst, Barclays

All right, so we started at the 30,000-foot view. We're going to hop right into the hot topics now of each. So we'll get right into it. So I guess on the Aries side first, so I guess the big debate in the stock had been, look, with traditional HGX deployments, you had H100 GPUs, they were talking to some x86 CPUs, and you had retimers that were sitting across those. There was a debate when you moved to NVL systems, would you see the same content there? And you guys initially started saying, "If you look at Gen over Gen, Hopper and Blackwell, you would see increased content." Could you talk about certain ways in which you're seeing your retimer product enter certain applications and deployments when it's not sitting just in the traditional way that you saw with the HGX?

Jitendra Mohan
CEO, Astera Labs

Yeah. So yeah, there was definitely that debate, and you have to appreciate that there are only so many things that we can say, right? We have to wait for our customers and the partners to first make their announcements before we can. I don't know. How many of you went to re:Invent? Oh, you guys really missed out.

Thomas O'Malley
Analyst, Barclays

Semi-heavy crowd.

Jitendra Mohan
CEO, Astera Labs

It was very impressive to see some of the items that were on display there, especially in the AWS booth. They have the Trainium Ultra server, which is basically a two-rack side-by-side solution, a lot of content there for connectivity, of course. They also had on display a server based on GB200. So two GB200 boards connected to nine of their Nitro NICs. So this is a great example where things evolve. Who would have thought about this in the past? When we started out with the Hopper generation, our connectivity was between the GPU and the CPU Head Node, and oftentimes went through a switch in between. With the Grace Blackwell platform, Grace and Blackwell went on the same board, and so the distances are short, and you don't need a retimer anymore.

However, that just means that there is now a requirement for other types of devices. So just to be very clear, our retimer content overall with Blackwell does go down, but that's okay and was something that we've known about for a very long period of time. We couldn't talk about it. That's a different story. What does happen now is we add our Scorpio content, which more than offsets any kind of a share loss, if you will, or opportunity loss with the retimer. So if you kind of simplify this, with the Blackwell generation, our overall content goes up, and it goes up because of Scorpio. I think this was the big debate that people did not know until we actually announced Scorpio.

But if you go up a little bit and you look at the retimers overall, including not just the third-party GPUs but also the ASIC platforms, then even the retimer opportunity goes up. And the reason for that is we are able to participate not only in the traditional scale-out designs that you've seen with the Hopper family, but also in the scale-up designs. And that's a very rich area for us to play in, just because the number of links available is a lot larger, and they tend to run at the highest throughput. And we have already benefited with the Aries SCMs, smart cable modules, which launched in Q3, are shipping for a full quarter this quarter, and will continue to ramp in 2025.

Thomas O'Malley
Analyst, Barclays

Yeah, I'm going to hop to Scorpio because I want to talk about the ecosystem and building the scale-up and scale-out network. But when you talk about the P switch, X switch, the P switch is traditionally in a realm where you have competition from much larger players, and you're talking about that as a big driver going forward. Can you talk about your competitive advantages because of your success in the retimer market thus far and how you see yourself competitively positioned in that P switch into the next generation? Because that's obviously driving the year, well, the generation-over-generation content increases.

Jitendra Mohan
CEO, Astera Labs

Yeah. So just to kind of put this in context, if you look at the P family, P stands for PCI Express, that's easy for you to remember, Scorpio P-Series. That's the traditional scale-out network. And if you look at the current system deployed today, Gen 5, you are absolutely right. They are built typically with a Broadcom Gen 5 switch. Now, what we did with Scorpio P-Series is we did not copy what Broadcom had done and say, "Here is a Gen 6 version of that switch." Maybe Broadcom will do that all by themselves. What we did is we looked at what the architecture needs to be for AI applications. So I'll say that the Scorpio P-Series and for that matter, the X-Series are built ground up for AI applications.

We did not focus on the traditional applications of compute to storage, compute to NIC, and so on. We really focused on GPU connects to NICs or GPU connects to storage, and of course, there happens to be a CPU as well. So it's built ground up to give you the performance that you need for these AI workloads, again, to get the best utilization out of your GPU infrastructure. So that was one. Second, our early access, right? Customers actually are coming to us and saying, "We want you to build this switch." So clearly, it's a big market, and there is room for more customers, more players. But customers came to us, and that really speaks to the relationship that we have with our customers, the trust that we have with our customers.

We work extremely hard to make sure that we deliver this switch in the time frame that they need to deploy their systems. If we cannot do that, then no amount of performance is going to be helpful. Of course, we were hoping that we would be even faster than Broadcom, which it turns out we are, so far. I wouldn't be surprised if there is an announcement from Broadcom or other people. We have worked very hard to build a better product. We meet our customer timelines. We are first in the market. Last but not least, we also have to support our customers exceedingly well, both from a technical standpoint as well as supply chain and commercial standpoints. It's one thing to deliver a product, but another to ramp it in volume and so on.

That's where our Cosmos software, which is already integrated at the hyperscalers, becomes very important.

Thomas O'Malley
Analyst, Barclays

And can you talk about the value that the switching product brings to you? Obviously, there's a certain number of channel lanes that are handled by a retimer. When you move into the switching ecosystem, I would imagine that expands. Can you give us some kind of numbers-based or just ideological sense of how much more those ASPs can increase there, with the P Series specifically? And we'll get to that.

Jitendra Mohan
CEO, Astera Labs

I'll let Mike take the hard questions.

Thomas O'Malley
Analyst, Barclays

Hard questions. Yeah.

Mike Tate
CFO, Astera Labs

Yeah, I mean, you want to look at it on a cost-per-GPU basis. So when it was Aries only, it was sub-$100 per GPU on average. Now it's multiples of $100. You don't want to look at it on a per-chip basis because there'll be a portfolio of products. It's based on lane count, but it's a significantly more value-added part than a retimer: one, because the lane count generally will be multiples higher, and also you have a switch fabric as well. So it's a much higher ASP.

Thomas O'Malley
Analyst, Barclays

And then when we look at that, how do we think about the X Series? So the X Series obviously came in the announcement. I think that was a surprising bit to some people, at least to me, when you think about, "Okay, you're really looking at a scale-up architecture." And outside, I think you said, of NVSwitch, you really didn't see anything like that in the market to date. So there are two things you're kind of talking about. You're talking about X switches sitting in a rack with some PCIe cables that are sitting next to them. How far behind is the Ethernet/PCIe world versus the NVLink world today? When can those deployments start hitting the market? Because I would imagine they come very much in tandem, because if you're scaling up, you obviously need the switching ecosystem as well as the PCIe cables.

Jitendra Mohan
CEO, Astera Labs

Yeah, two questions there. One is how far behind is that. That's a subjective question. NVSwitch, I don't know which generation they're on, but they are pretty far advanced, and they have this architecture figured out. They have the software piece figured out on how to build clusters. I believe the latest ones go up to 500-something GPUs. But the sweet spot is much smaller than that. NVL72 is the rack that's supposed to be the most deployed. And at that particular scale, we can achieve that scale with PCI Express. How exactly customers do it is proprietary to them, so we should not comment on that. But we can enable that with PCI Express. It's a complex equation between how many GPUs you want to connect and what your software looks like, because PCI Express makes it actually very easy to address a lot of the GPUs.

If you try to do the same thing with Ethernet, you get faster speeds, but you don't get that ease of software. So a lot of the performance goes away because of the added latency of RDMA and things like that. But nonetheless, I mean, Scorpio X is a very, very good opportunity for us, and I think we will continue to build more products to address this fully. The other thing to point out here is it's not only going to be NVLink and Ethernet and PCI Express. There is the UAL standard that has just come about, which really tries to give an open ecosystem for everybody to build their own scale-up networks, and of course, in some ways to compete with what NVIDIA already has with NVLink. So that's a great opportunity. We are a promoting board member. All the hyperscalers are part of that board.

So again, this sets us up very well to participate in this scale-up TAM.

Thomas O'Malley
Analyst, Barclays

Another hard question for Mike is if you look at the added benefit for the X series, even versus the P series, the way that I think about it in my head is you have an Ethernet switch, and then you have kind of an X switch, and then you have a P switch. Is that the right way to think about it from a value perspective as well? Is the X switch going to do a lot more? Because you're obviously handling multiple servers potentially versus multiple lanes with a single server. How should we think about the added benefit there?

Mike Tate
CFO, Astera Labs

The X Series, because it's customized, will have a higher value for us, using our Cosmos software. But the unit volume is much higher as well when you're supporting PCIe, because of this mesh network topology. So the X Series TAM right now is just starting. It's greenfield, zero. But we see it growing to $2.5 billion by 2028, which is significant growth. For the P Series, it's a $1 billion TAM right now, but it's also going to grow to about $2.5 billion in 2028. That's how we get to the $5 billion. But ultimately, the X Series will be the larger segment, and that can be even better for us if UAL takes off widely.
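[Editor's note: a quick sanity check on the TAM figures cited above. The dollar amounts are from the call; treating "right now" as a 2024 baseline, and thus four years of growth to 2028, is an assumption for illustration.]

```python
# Rough check on the TAM growth Mike describes. TAM figures ($B) are from
# the call; the 2024 baseline year (4 years to 2028) is an assumption.
def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

# P Series: $1B today growing to $2.5B by 2028
p_cagr = implied_cagr(1.0, 2.5, 4)
print(f"P Series implied CAGR: {p_cagr:.1%}")  # roughly 26% per year

# Combined 2028 TAM: P Series plus X Series, matching the $5B cited
total_2028 = 2.5 + 2.5
print(f"Combined 2028 TAM: ${total_2028:.1f}B")
```

The X Series line has no meaningful CAGR since it starts from a zero base; the P Series rate gives a feel for how steep the assumed ramp is.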

Thomas O'Malley
Analyst, Barclays

Are they not going to use a proprietary PCIe or Ethernet standard when the UAL consortium settles on their ultimate design, or will there be variables? I mean, the idea is that it's standards-based, but this goes back to the question of which way the market is moving. Is Ethernet going to come down with active electrical cables running Ethernet, or are you going to see PCIe cables? Do you guys have a preference for which way that world goes? Because you kind of serve both markets, right?

Jitendra Mohan
CEO, Astera Labs

Yeah, and that has been a theme. We are Switzerland, right? If you want to scale with a GPU-based cluster, we're happy to support you. If you want to do ASICs, we're happy to support you. And a couple of things I'll mention. First of all, hyperscalers always customize. They're not really beholden to any particular standard, whether it's PCI Express or Ethernet or, in the future, UAL. So even if there is an open UAL standard, that will become the base, and then we will start to do some customization for the hyperscalers there. The power of UAL is really that it brings you sort of the simplicity of PCI Express with the speeds of Ethernet. And we will have to see how exactly it plays out. If you look at Ethernet and what they are trying to do with UAL, there is a place for that, absolutely.

Because when you leave a cluster, when you go from scale-up to scale-out over Ethernet, I mean, PCI Express cannot do it. I don't think even UAL will be able to do it. So for that, you have either Ethernet or InfiniBand, and I think that debate is kind of getting settled towards Ethernet as before. So outside of what Mike mentioned, the TAMs for P Series and the X Series, the Ethernet TAM is much larger. Of course, Broadcom is the big gorilla in that space already.

Thomas O'Malley
Analyst, Barclays

Super helpful. Why don't we switch over to the Taurus segment? So you've recently seen some evangelizing of that market. Marvell put out an announcement with Amazon talking about a large agreement, including AECs, etc. You guys have talked about shipping to hyperscalers. Can you talk about, and this is more of a comment on the industry, in your eyes, why have AECs seen this inflection point? Is it really the rise of custom silicon? And when you look at your product versus your competitors' products, you've explained the form factor difference already, but what else gives you an advantage to keep market share where you guys are at today?

Jitendra Mohan
CEO, Astera Labs

Yeah, I think, just to first talk about it from an industry standpoint, it's not necessarily an inflection per se, but it was something that we have been talking about for a while. We've been shipping our 200-gig Taurus products for some time now. We started ramping our 400-gig in Q3, we will continue that ramp in Q4, and this will continue in 2025. So I would say at the 400-gig data rate, it's still kind of a niche application. Some hyperscalers will choose to deploy AECs because they have certain requirements on how thin the cables need to be, what length they need to go, etc. Some other hyperscalers might be perfectly okay with a thicker cable, or maybe they have an architecture that doesn't go as long. So in general, when somebody can deploy a passive cable, they will use a passive cable. It's a no-brainer.

If they cannot, then they will go to active electrical cables. And when they cannot do that, then they will go to optics. So I think there will be different inflection points along the way based on which hyperscaler adopted it and is ramping. If you look at the general-purpose compute applications of a server to traditional top-of-the-rack switch, they will stay 400-gig. So they're unlikely to move to 800 anytime in the near future. So it's the AI applications that will drive 800-gig deployment. And some people will do it with copper. Some will do it with active electrical. Some will do it with passive. And some may need optics if the top of the rack switch is not at the top of the rack. So it'll be different, and I think different people will get different shares.

It's going to be a competitive situation on the Taurus side.

Thomas O'Malley
Analyst, Barclays

When you guys look at your Aries portfolio and your Taurus portfolio, you guys have talked to the entire ramp about how you are exposed to all hyperscalers. Can you talk about different concentrations among those two? Is it more focused on one customer in the Taurus portfolio, or is it still servicing kind of everyone?

Jitendra Mohan
CEO, Astera Labs

Yes, specifically on Taurus, it is niche, and we have a lead customer. That's what we are focused on for the 400-gig deployment, and of course, the reason we are unique outside of the form factor is, again, the Cosmos software, what we are able to do, not only in terms of the raw link performance, but what we can do from a security standpoint, from managing the links, collecting the diagnostics, etc., being able to update the firmware on the cable without bringing the server down. There are some kind of unique differentiations that we are able to offer, which hopefully will enable us to get our share, our fair share of the business.

Thomas O'Malley
Analyst, Barclays

Okay, let's move to the last one in terms of Leo. So I think that this is oftentimes in my conversations the most difficult one for investors to grasp, just understanding how there's a memory wall and how we're knocking on that door today. And ultimately, people feel we're looking towards memory pooling and AI. Could you talk about the first stage for you guys with CXL? How are you guys going to enter that market? And how is that differentiated from kind of the end state where you do look more like pooling? But what's that first step? Can we spend a little time?

Jitendra Mohan
CEO, Astera Labs

Yeah, we've been very consistent in saying that the first deployments for CXL will happen in general-purpose compute, and they will be for memory expansion. Memory pooling is kind of the sexy term to talk about, but it is far enough away for various reasons: availability of CPUs, availability of standards, and so on. And we've been in this space now for 18 months, maybe closer to two years, where we have learned a lot. We have a lead customer that is now doing rack-scale qualifications. And the thing to understand about this opportunity is everybody talks about CXL, CXL memory expansion. Actually, it's a lot more about the memory itself. Clearly, CXL enables it, but the way each hyperscaler and each CPU vendor deals with memory is different. And that's what we've learned over the last several months.

And we have included these customizations in our Cosmos software that runs on top of the Leo platform, both in our chip as well as at our customers. So we think that we have a strong competitive moat with the Leo platform. And now with the release of the Granite Rapids CPU from Intel, Turin from AMD, and equivalent ARM CPUs, that hurdle has been removed. So we should start to see Leo-based CXL deployments in 2025 for memory expansion. Now, why memory expansion? The ROI is very clear, at least in our heads it's definitely very clear. As you go to these bigger CPUs with lots and lots of cores, your memory bandwidth gets limited. And so the only way, or one of the easiest ways, to increase this memory bandwidth and memory capacity is to attach memory over CXL.

It gives you the ability to hold your database in memory, get more benefits for general-purpose compute, and our customers see it. So there is a lot of excitement, but it needs to translate to revenue in 2025.

Thomas O'Malley
Analyst, Barclays

Gotcha. Okay, so if I look at all these product portfolios, back to Mike: versus the IPO model when you guys first came out, you've introduced a new product family, and it's rather significant. I think you've talked about something like 10% of revenue, or maybe even more, going into calendar year 2025. It's a chip-based solution, so I would assume gross margins there are generally a bit better than if you're looking at an entire product. How does this change the gross margin trajectory into 2025? And then going forward, obviously, this is a little different than you initially laid out. It seems more favorable. Is that the right way to think about it?

Mike Tate
CFO, Astera Labs

Yeah, Scorpio will have a broader opportunity set. Depending on lane count and Gen 5 versus Gen 6, the market's moving so fast that we've identified the sweet spot in addressing multiple opportunities, both higher-end and lower-end, with that. So we will have a wider range of margins for this product line. And over time, we will broaden the product portfolio and have a more targeted opportunity. So although the higher end is very margin accretive, on a blended basis we still believe that this will maintain our margin model of 70%.

Thomas O'Malley
Analyst, Barclays

Got it. And then in terms of your ability to keep spend at a place where you get leverage in the model, can you talk about what you're doing in 2025 to make sure that this massive uptick in revenue kind of goes to the bottom line?

Mike Tate
CFO, Astera Labs

We're actually focused on investing in the business right now. We're not focused on leverage and operating margins right now. Longer-term, we are, and we feel very comfortable that we can be a very profitable company, but right now there are so many opportunities, and they're very large opportunities, that now is the time to invest in the company. Now, we are driving operating margin leverage right now because our revenues are growing so fast, but if we had our way, we would actually step up and invest even more in the business right now.

Thomas O'Malley
Analyst, Barclays

In terms of capital priorities, investing in the business is one. In terms of your technology profile, it's so interesting. If you look at retiming in general, it cuts across optical, PCIe, a variety of different form factors. The one area where you guys don't play is kind of like the optical DSP space, which is where there's an incumbent. In terms of your technology profile, do you think that there's any areas that you see kind of on the come still? You've surprised us twice already, so it's a little much to ask you for something else. But where do you kind of see the next direction of the business going? Do you feel like you've now kind of revealed all of your cards in terms of your technology profile, or there's still more to come?

Jitendra Mohan
CEO, Astera Labs

I think we'll continue to invest in new technologies. Specifically, if you talk about optics, and it's no surprise, at some point as the data rates go up, we are going to intersect with optics, even where we are today. So today, the way to look at it is if it's a connectivity that's happening at the rack level or maybe two racks adjacent to each other, we would like to keep it copper. Not because we are good in copper, just because our customers want to keep it copper. It has benefits on power, it has benefits on reliability, and it has benefits on cost. So we will continue to drive the next generation to run over copper. But at some point, it's going to intercept with optics. And there is a huge existing market for running optics on the east-west traffic connecting switches together.

So it's a great area for us, and we are definitely exploring it. Exactly what shape it takes, we will talk about closer to when we are ready to productize it. Right now, it's mostly in exploration.

Thomas O'Malley
Analyst, Barclays

It's been a great couple of months. Congratulations, and thanks for being here, and look forward to a great 2025. Thank you very much.

Mike Tate
CFO, Astera Labs

Thanks.

Thomas O'Malley
Analyst, Barclays

Thank you. Appreciate it.

Mike Tate
CFO, Astera Labs

Thank you.
