Well, folks, thanks for joining us. My name is Simon Leopold, a semiconductor and data infrastructure analyst here at Raymond James. We're at the second day of our TMT+C conference. It's Telecommunication, Media, Technology, and Consumer here in New York. Thanks, everyone, for joining us. So this morning, kicking it off with a fireside chat with Astera Labs. And we have with us Nick Aberle, who's the VP of Finance and Investor Relations. So fireside format, some Q&A that we prepared. Let's kick it off with, how do you like to introduce the company to a potential investor who may be new to the story, trying to get a ramp up and understand where Astera fits into the world?
Yeah. I mean, thanks for having us, Simon, and thanks, everybody, for coming and supporting us. So the company has been around now for a little over seven years, and the way I like to open it up is to say that the vision and mission of the company is to solve networking, memory, and data bandwidth bottlenecks within the data center. If you look at the portfolio that's been rolled out over the last couple of years, it's really been very much focused on relieving these bottlenecks, but not just relieving them, also providing a lot of proactive information, data, and feedback to hyperscaler fleet managers to allow them to drive much more productive, higher-utilization data centers. We've been very successful at that for the first couple of years here. We've been ramping.
I think it's always helpful to start with where we were a couple of years ago, where we are now, and where we're going over the next couple of years. If you look back to 2023, the initial Aries PCI Express Gen 5 portfolio ramped into volume production, most notably on NVIDIA-based platforms in HGX form factors, and that was really what drove a lot of the initial volume for Astera Labs. In 2024, it expanded out beyond these merchant GPU-based systems into systems powered by custom ASIC accelerators from the likes of AWS, Google, and others, so that was the next foray into the market. We started selling not just Aries PCI Express retimer solutions, but also our Taurus Ethernet retimer solutions and modules into these platforms for scale-out opportunities.
In 2025, a pretty landmark moment for the company as we introduced our Scorpio Smart Fabric Switch family, which is for both scale-out connectivity and for scale-up connectivity. The ramp that we've seen thus far in 2025 has been focused on scale-out, so think connectivity between CPUs, GPUs, networking, storage, and memory. And then as we move into 2026, we have a very large opportunity to ramp the X series of the Scorpio solutions, which is for scale-up, and that's going to be connecting GPUs to GPUs or AI accelerators to other AI accelerators. Beyond that, and I'm sure we'll kind of get into the longer-term vision in a minute, but obviously, everybody's moving towards a rack-scale approach to building out AI and cloud infrastructure going forward.
So we're moving from smaller systems to much larger, much more complex units, which puts an even bigger strain on the connectivity backbone, which we and others hope to address over the next couple of years. But we're really going to do it from a couple of different standpoints. We're going to provide silicon into these boxes. We'll provide modules and hardware into these systems. We'll provide software support that allows these guys to continue to proactively monitor their systems. And we're going to do it over a variety of different protocols. We've seen nice revenue ramps on the PCI Express side, Ethernet, of course. But we also have opportunities to get into some new protocols as well, including UALink, which I'm sure we'll talk about shortly. We had a big NVLink announcement last week, so we can talk about that as well.
So I think the plan here is to show some broadening and some diversification beyond the nice revenue ramp that we've shown over the last couple of years. And there's just a tremendous amount of opportunity for us to go prosecute that.
So, I think you guys have coined this term AI Infrastructure 2.0. And I feel like you sort of have alluded to that, but maybe sort of solidify that for the audience in the sense of sort of what's the old architecture and what's the new architecture and what makes it 2.0?
Yeah, so it's really that transition from looking at an AI server as the unit of compute, which was a new thing if you look back just a couple of years ago. But it doesn't take long for the industry to continue to move on. And NVIDIA has been an amazing trailblazer for the space in terms of moving to a rack-scale solution with the NVL72-based systems that started to launch and ramp earlier this year. But outside of NVIDIA, I haven't seen anybody else really start to build these racks at scale yet using scale-up back-end domains to interconnect all these different accelerators and all these GPUs. So for us, AI Infrastructure 2.0 is the transition towards a rack-scale architecture for the broader industry.
And like I mentioned before, that brings a lot more complexity and a lot more need and demand for high-performance connectivity solutions that can manage speeds that continue to double, call it every one to one and a half years. The boxes are getting bigger, so the distances between the endpoints are getting longer. That puts a tremendous amount of pain on connectivity, especially when you're trying to move data very fast between these endpoints. There's also the embracing of open standards within these racks and the desire of hyperscalers and large customers to embrace a collaborative open ecosystem in order to drive technology velocity going forward. So we've really put our arms around this open-standard approach. Obviously, we have a very nice foothold on the PCI Express side and a nice ramping business on the Ethernet side.
But we see open standards like UALink being very much utilized going forward as hyperscalers look to base their multi-generational design roadmaps off something that they can see and collaborate on, relying on a very diverse supply chain to get product from.
And so how would you describe your competitive moat? So what's the advantage in that? If we think about the semiconductor space, there are so many companies that could potentially be playing and competing against you. And you seemingly have had a leadership position. What's sort of gotten you there and what keeps you there?
Yeah, so the initial step into the market on PCI Express Gen 5 specifically was really on the back of providing a solution that was on time and high performance. But I think the key differentiator was this ability to provide value-added feedback to customers in order to have them increase productivity and utilization within their systems. So think about the size and scale of these data centers, the number of boxes proliferating in these buildings, the number of links, I mean, just the PCI Express links alone that are connecting all these different endpoints. All those endpoints are coming from different vendors and configured in different ways. There's a tremendous amount of complexity involved in that.
Aries, our flagship gold-standard PCI Express retimer solution, basically allows these guys to sit and monitor each one of the millions of links that they have within their data center. There have been few, if any, AI servers shipped into the market over the last, call it, four years that haven't had some type of Aries content within them. We've built a software framework, based on both firmware that sits on the chip and an API that's provided to customers, that allows fleet managers to build dashboards and monitor the specific attributes and feature sets that are important to them within all these different boxes, so they can see what's happening on each link and proactively maintain it before something really bad or destructive happens within these systems.
The name of the game for them, obviously, is to monetize all this equipment. There are tens of billions of dollars of CapEx being installed, and to the extent that we can increase utilization by even a couple of percent, obviously, it's more than paying for our solution. So for us, from a competitive standpoint, that's going to be a huge differentiator going forward. All the products in the portfolio going forward will support Cosmos, our software for fleet management and observability. And that's going to be something really powerful that we leverage.
I feel that if you look at the installed base that we've been able to put out and entrench into the market as well, there's a tremendous amount of learning and knowledge that we've been able to establish and pick up through that process that somebody else, even with the same exact chip and the same exact software, would have to go recreate, and they'd need somebody to give them the opportunity to proliferate widely and become battle-tested. So I feel like that's going to continue to be a big hurdle for others as well.
So maybe it would help to map some of the products, because you've mentioned some of the product names. Just a little bit educational as to what they do. Because I've got the sense that Aries is the anchor, the foundation where this company got its big start, and now Scorpio is the cool thing that's growing fastest. But think about the level of understanding financial analysts need to take away. I often get these questions of: prioritize these products, where's the growth coming from, what's most important? Maybe walk people through the layman's version of the functions.
Yeah, so I think the exciting thing about the Astera Labs story is obviously we've had a lot of success with the early products that have ramped into the market. We have very broad exposure. We ship to a vast majority, if not all, of the hyperscalers and AI platform providers in the space. But if you look at the roadmap going forward and what we're trying to accomplish over the next five-plus years, what you'll see is an increased focus on providing more value to hyperscalers. As you mentioned, some of these initial products, and I'll walk through each of them in a minute, served a purpose that allowed hyperscalers to accomplish a certain thing, which is to monetize their data centers, and that got us invited back to the party to do more.
That building of relationships on the Aries and Taurus side over the last couple of years has really invited us into these conversations, where customers say, hey, this is what our AI infrastructure and how we build our cloud is going to look like over the next several years. We need you to solve this, this, and that challenge. We need to go work together to build the solutions that are going to solve those challenges of tomorrow, not what's happening today. I would classify Aries and Taurus as signal conditioning solutions. Think of it as signals moving around within a box at rapid rates, based on either a PCI Express or an Ethernet protocol. Like I mentioned before, you have a very eclectic group of different endpoints that you're ultimately trying to connect.
These signal conditioning solutions sit in the middle of these links and are able to basically take data in, repair it, replicate it, and move it to the next area. The reason this is important is because the boxes are a lot bigger today than they were in the general-purpose compute space going back five years, when chips were a couple of inches apart. You could have situations now where they're multiple meters apart. Over the longer term, we would expect these signal conditioning solutions to see higher attach rates as a function of both increased distance and increased speed. Really, the next class of solution that we're rolling into the market is these fabric solutions, which do more than just connect one endpoint to another.
They're really starting to be called the traffic cop of these systems, interfacing with multiple different endpoints, all interacting with each other over a next-generation high-speed standard like PCI Express 6, which we're supporting today. And I believe we're the only guy shipping in any type of meaningful volume into the market today. It's a high-performance solution that has high-end SerDes on the front end, but is then able to handle, with high port counts and high lane counts, all these different endpoints within a box. So look at the value that we were grabbing in the signal conditioning landscape, call it a couple of years ago, where we could get maybe $50-$100 worth of content on a per-accelerator basis.
With Scorpio, at least on the P Series, what we've seen so far is that number expand to multiple hundreds of dollars of content opportunity per accelerator within a rack. And as we move to scale-up topologies next year with the Scorpio X Series, you're talking more complex solutions, bigger chips, more performant chips, faster solutions. You start to see that couple of hundred dollars move into many hundreds of dollars, even approaching a $1,000 threshold in the full grand slam scenario where we win everything that we can provide into a box. So that's the trajectory. I'd say we're in the business of stapling as many dollars as we can to every XPU that goes out the door. There weren't a lot of dollars to be stapled early on, although we were very successful with our penetration.
Now the next step is to kind of grow this Scorpio business into the space and get that proliferated as widely as we can.
So I feel like your stock gets knocked around with headlines on different competitive shifts among the hyperscalers and among the XPU suppliers. And I sometimes scratch my head a bit because it seems like you're everywhere. So maybe it's a question of share in different types. So help folks understand your exposure, whether it's Blackwell, Trainium and Trainium generations, or the ASIC options, in that I think you have the ability to address all of them. But is something more important? Does something carry greater content? How should we think about the importance of the different architectures and options for XPUs?
Yeah, so one of the big step-ups in the company's business profile over the course of the last 12 months has been the proliferation of XPUs and custom ASIC accelerators into the market. And the big reason for that to be such a large catalyst for us, aside from just having more XPUs to attach to, or to staple dollars to, is the fact that the addressable market opportunity is so much bigger in those types of environments versus an NVIDIA environment for us at the present time. Maybe that changes going forward, but we'll talk about how it's trending right now.
So the company cut its teeth in scale-out, what we'd call head-node connectivity, where there's a finite amount of units and demand that sits between all these different endpoints as we're trying to clean up signals and, like I said before, do good observability and monitoring for the customers. But when you open up the scale-up back-end clustering application, you're starting to see an environment that has a very rich attach rate relative to what we see on the scale-out side, where you can see multiple signal conditioning solutions attached to any given XPU because they all need to talk to each other. The big difference, obviously, is that NVIDIA, within their own environments, uses a proprietary chip-to-chip protocol called NVLink. So there are no outside vendors that can support that today.
But in other places, like some of the big hyperscalers, they're using open standards to drive their chip-to-chip interconnect strategy on the back end for scale-up. So you see some customers today ramping, or just on the verge of ramping, PCI Express or customized versions of PCI Express for scale-up. So it's really that difference between the merchant GPU landscape and the XPU landscape that provides us a much bigger opportunity. And it's accelerated our growth this year. And it's not to say that we don't have good content on the NVIDIA side.
A vast majority of the Scorpio growth that we've seen in 2025 has been driven by NVL72 attach, Blackwell attach. We went from shipping Aries Gen 5 solutions to shipping both Aries Gen 6 solutions and Scorpio P Series solutions, so a multi-hundred-dollar content step-up in that environment alone. But when you unlock that back-end scale-up marketplace, the attach rates go up, the complexity of the solutions goes up, and you're still getting all those opportunities on the scale-out side. You're just adding all the scale-up opportunity on top of it. So we obviously are very broad in terms of how we ship out into the market. We ship to all the major hyperscalers today. But it's really on the XPU side where we see the richest content opportunity and the densest attach rates.
So there's a section in our report with the sort of subject heading Alphabet Soup because there's this sort of variety of protocols that you have to deal with. And so it feels like PCI Express is kind of home base. UALink is sort of what's ramping. And I feel like the debate is around Ethernet and flavors of Ethernet and then NVLink Fusion, which, if I understand correctly, is sort of the version of NVLink that is for others. How do you think these stack up in terms of what are their market preferences and where do you have advantages or where are you threatened?
Yeah. So I would say you referred to volatility earlier. I mean, we do see a lot of volatility in the stock. But I mean, I think a lot of it's because there's such a tremendous opportunity out there, especially when you start to think about the scale-up domain for connectivity solutions like ours and like for others. You're talking about tens of billions of dollars of potential market opportunity that is effectively zero today. So you're going to have this huge market landscape that's basically being born and that will ramp and grow to tremendous volume over the next, call it, three-to-five years. And you have folks jockeying for their position to kind of put themselves in a place where they have the best opportunity to win the biggest piece of that market as possible.
And we're all, us very much included, working tightly with each of these individual AI platform providers or hyperscalers or infrastructure builders to help support the decisions that they're going to make in terms of what path they want to go down. So as you correctly mentioned, you have a couple of different scale-up protocols that will be prevalent, I would say, today and over the next year or two. That would be obviously NVLink and NVLink Fusion, where the Fusion piece is for the XPU environment. You have PCI Express, or derivative, semi-custom approaches to PCI Express. And then you have standard Ethernet. So those are the flavors that guys get to choose from today. And I think they choose any of these given paths for different reasons.
No two clouds are the same. So they have different preferences, different focus points. If you're focused on driving low latency, you might go down that PCI Express path. If you're looking for just raw horsepower, you might go down the Ethernet path today. But what we are seeing for the next several years going forward, and UALink kicked this off in a big way last year, is a focus on building an open standard that's purpose-built for AI. PCI Express obviously wasn't built for AI. That thing's been around for decades, same with standard Ethernet. So given the market opportunity, given the dollars and the resources at play, it makes a tremendous amount of sense to collaborate and bring an ecosystem together to drive these technology standards going forward.
UALink will be the natural extension of PCI Express going forward. We've talked about engagements with over 10 potential customers that are looking to use PCI Express for scale-up today, with a vast majority of them looking at it as a step towards UALink over the longer term. On the Ethernet side, you're starting to see some of the same movements, right? The announcement of ESUN back at OCP was basically the same move that UALink made a year ago, which was bringing a consortium together. Let's get together and collaborate on a spec. Let's get that spec out. Let's make it open. We'll have an ecosystem that can support it for diversity of supply. It's such a big space. I think what you're going to end up seeing is different players taking different paths, whichever suits their profiles the best.
NVLink Fusion: if you don't have a ton of resources internally to go build your own scale-up infrastructure on the back end and you want to get to market quickly, NVLink Fusion could be a good path for you as well. So I think each one of these pieces is going to have its play. We're going to try to get products in the right areas to expose ourselves to as much of the market opportunity as we can. But as of right now, just given the engagements that we see, the relationships, the multi-generational roadmaps that we're attaching to, we're confident that we're going to be playing with multiple significant players here over the next five years.
Is the right strategy for Astera to sort of be really, really good at sort of betting on a horse to win, or is it to support every protocol?
Yeah, I mean, I would say that it's listening to the customer. You've listened to Jitendra and Sanjay. One of the company's core values since the beginning is to be intently focused on what the customer is asking for and understanding what that customer is trying to achieve and solve for, not just now, but going forward. So to the extent that customers are focused on going from PCI Express to UALink over the next five years, obviously we're going to go down that path and expend significant resources to support that move. If customers come to us and say, "Hey, we want to go down an NVLink Fusion path," we have a play there. We have a very tight collaboration with NVIDIA.
And we can work very closely with them, the hyperscaler and NVIDIA, to bring a solution to market over the longer term. And then even Ethernet: nobody's knocking on our door at the moment to go build an ESUN switch, for example. But to the extent that happens, which it very well might, then we would push in that direction as well. So we're typically not building products to go chase markets and look for customers later. We're typically building products where we have a teaching customer that's going to help shepherd something to the finish line and then buy a significant amount of product at the end.
So I want to pivot a little bit. You recently made an acquisition, or closed an acquisition, of a company. I'm going to say it, aiXscale. Is that the sort of?
aiXscale. It took me a while to figure that one out as well. Yeah.
aiXscale.
aiXscale. So it's photonics. And the reason I think this is intriguing is that one of the risk statements is, when the world goes to co-packaged optics, do we need as many retimers? What does that do to your business? And so now you've made an acquisition to, I think, address that opportunity set. Can you help us understand how you think about the opportunity set, the timeline? The risk statement I've made, is that a valid statement? How are you thinking about it?
Yeah, no, I mean, we see it as an opportunity. We've been very consistent in our messaging around optics and CPO over the course of the last, call it, year and a half, since it's been getting a lot more attention, that eventually, due to increasing speeds and increasing distances, optical will have to be a solution to allow you to open up that next segment of the market. So we've been dropping breadcrumbs over the last year: hey, we're going to get into optical. We're spending money, bringing resources in, hiring folks to focus on optical solutions for the market. I think the aiXscale acquisition specifically is that next hint of the direction we're going, which is clearly towards the scale-up application for optics and silicon photonics.
Our view is it's still a little ways away. So I think there's some time here to develop and to build relationships with customers to understand what they want. But what we're hearing from those AI platform providers and infrastructure builders is that optics is still very expensive and less reliable, especially for a scale-up domain. It's going to be used only when it's absolutely necessary. So what we expect to see over the next several years is, today, we're scaling up maybe 72 accelerators within a rack. Maybe that doubles to two racks for 144 accelerators going forward, which should all be able to be supported by copper. But at some point, you might want to start clustering and scaling up across several racks, three, four racks.
And in those types of situations, when you need to move data more than five, six, seven meters, optical is going to be the only play. So we're hearing from customers that multi-rack scale-up will be the application that first allows for penetration of CPO or some type of silicon photonics within the scale-up market opportunity. So we're putting together the pieces of the puzzle to go penetrate that market. We have a lot of the technology in-house today already. There are more things that we need to build and potentially partner for or buy. But the glass coupling technology that came over with the aiXscale guys is an intriguing piece, a novel, nuanced part of the puzzle that we're hearing from customers is extremely important.
For those who don't know, it's basically the technology that allows the silicon photonic chiplet to connect with the fiber media on the other side, engaging that CPO solution with the optical connectivity on the other end. This is a really tricky, difficult piece to figure out, especially at scale, when you're starting to build, ship, and deploy many, many units. So this is one where we didn't feel like we had the expertise in-house to go figure it out on our own. We found a great team in Germany that's going to help us do that, that's already started to show that they can do it at scale. They have good IP that supports it. But a lot more to come here. We're going to have more announcements.
We're clearly going to kind of go down that path to try to get into this market in a big way when it starts to ramp.
I like it. I refer to optics as the one thing I actually understand. So it sort of brings it closer to home to me. But when I think about this acquisition, I sort of can envision a couple of other paths. Do you look at it as sort of a feature technology that gets incorporated into the Scorpio family? Or do you look at this as a building block to future products? How should we think about the use of it?
Yeah, so I would say that we're in a good position today to be a player in this market over the long term because we have the critical piece of the puzzle, which is the switch. And today, you're basically going to scale up and connect that switch with copper endpoints, copper traces, copper connectivity, copper links. But over the long term, Scorpio has an opportunity to cross over into the optical domain as well. So optically enabling Scorpio is a key focus for us. Taking advantage of that anchor socket and pull position with a best-in-class switching solution is going to be the heart of the product, obviously. And then putting the pieces in around it, in terms of the silicon photonic chiplet and the fiber coupling technology from aiXscale, to get a full system-level solution is a key piece of that as well.
It's taking what we have today, which is the anchor socket. This is one of the building blocks, like you mentioned, to kind of build out a grander solution over time.
Then Marvell recently did this acquisition, Celestial AI. It sounds similar but different. Do you look at this as competition, validation? How do you sort of think about what's going on across the industry?
Yeah, probably a little bit of both. Again, we talked earlier in the chat about the significance of the scale-up connectivity opportunity, tens of billions of dollars of land grab for folks like us and others to go after as this market starts to ramp up. And to the extent that optical starts to play and expands that market opportunity even further, there's going to be a tremendous opportunity for everybody. So clearly, those guys are making the strategic moves that they think they need to put into place in order to capture that market. We're going at it in a little different way. But at the end of the day, I do believe that it's a bit of validation. And ultimately, it will be some healthy competition that plays out here. Again, big markets, big customers to go capture.
We're all kind of racing to get to the finish line there.
So you know how they say time flies when you're having fun?
Yeah.
Our time's up. I want to ask one question to close out, which is, what do you consider either the least appreciated or most misunderstood aspect of the Astera story?
I mean, I think just as a function of being kind of a new public company, it's taken a little time for folks to get to know us. And we've tried to establish a track record and relationship with investors. And we feel like we've been doing a good job delivering results and trying to be conservative and move the ball forward. But I think that the biggest thing, and hopefully this has started to show itself over the last couple of years and even more going forward, is that we're really here to build a large iconic company over the long term. And the way that we're going to do that is to provide even more and more value to customers.
When we were thinking about how to build out our roadmap, how to support customers, where to dedicate and allocate resources, it's really with our arms and hands tightly coupled with our customers trying to figure out how to build this thing, not just for the next year or two, but for the next five to ten. I think a lot of the steps you'll see from us in terms of new products, new solutions, new announcements will be very much tied to how do we put ourselves in a position to be an iconic company over the next five plus years.
Great. Nick, thanks for joining us. Thanks, folks, for joining us. This was Astera Labs.
All right. Thanks.